Book 1
Operating Systems
Indira Gandhi National Open University
School of Computer and Information
Sciences (SOCIS)
MCS-203
Operating Systems
Indira Gandhi
National Open University
School of Computer and
Information Sciences
Block
1
INTRODUCTION TO OPERATING SYSTEMS
AND PROCESS MANAGEMENT
UNIT 1
Operating System: An Overview
UNIT 2
Processes
UNIT 3
Interprocess Communication and Synchronization
UNIT 4
Deadlocks
PROGRAMME/COURSE DESIGN COMMITTEE
Prof. Sanjeev K. Aggarwal, IIT, Kanpur
Shri Navneet Aggarwal, Trinity BPM, New Delhi
Prof. M. Balakrishnan, IIT, Delhi
Prof. Pandurangan, C., IIT, Madras
Ms. Bhoomi Gupta, Sirifort College of Computer and Technology Management, New Delhi
Shri Manoj Kumar Gupta, Keane India Ltd., New Delhi
Shri Sachin Gupta, Delhi Institute of Advanced Studies, New Delhi
Prof. Harish Karnick, IIT, Kanpur
Shri Anil Kumar, Amity School of Engineering and Technology, New Delhi
Dr. Kapil Kumar, IIMT, Meerut
Dr. Sachin Kumar, CCS University, Meerut
Ms. Manjulata, Amity School of Engineering and Technology, New Delhi
Shri Ajay Rana, Amity School of Computer Sciences, Noida
Dr. Divya Sharma, Bharati College, Delhi
Shri Neeraj Sharma, Havard Institute of Management Technology, Noida
Shri Sanjeev Thakur, Amity School of Computer Sciences, Noida
Shri Amrit Nath Thulal, Amity School of Engineering and Technology, New Delhi
Dr. Om Vikas (Retd), Ex-Sr. Director, Ministry of ICT, Delhi
Shri Vishwakarma, Amity School of Engineering and Technology, New Delhi
Prof. (Retd) S. K. Gupta, IIT Delhi
Prof. T. V. Vijaya Kumar, Dean, SC&SS, JNU, New Delhi
Prof. Ela Kumar, Dean, CSE, IGDTUW, Delhi
Prof. Gayatri Dhingra, GVMITM, Sonipat
Sh. Milind Mahajani, Vice President, Impressico Business Solutions, Noida, UP
Prof. V. V. Subrahmanyam, Director, SOCIS, New Delhi
Prof. P. V. Suresh, SOCIS, IGNOU, New Delhi
Dr. Shashi Bhushan, SOCIS, IGNOU, New Delhi
Shri Akshay Kumar, Associate Prof., SOCIS, IGNOU, New Delhi
Shri M. P. Mishra, Associate Prof., SOCIS, IGNOU, New Delhi
Dr. Sudhansh Sharma, Asst. Prof., SOCIS, IGNOU, New Delhi
PRINT PRODUCTION
Mr. Tilak Raj
Assistant Registrar,
MPDD, IGNOU, New Delhi
July, 2021
© Indira Gandhi National Open University, 2021
ISBN : 978-93-91229-15-3
All rights reserved. No part of this work may be reproduced in any form, by mimeograph or any other
means, without permission in writing from the Indira Gandhi National Open University.
Further information on the Indira Gandhi National Open University courses may be obtained from the
University’s office at Maidan Garhi, New Delhi-110068.
Printed and published on behalf of the Indira Gandhi National Open University, New Delhi by the
Registrar, MPDD, IGNOU, New Delhi
Laser Typesetting : Akashdeep Printers, 20-Ansari Road, Daryaganj, New Delhi-110002
Printed at : Akashdeep Printers, 20-Ansari Road, Daryaganj, New Delhi-110002
COURSE INTRODUCTION
The most fundamental of all system programs is the operating system, which controls
computer’s resources and provides the base on which application programs may be
written. A modern operating system manages one or more processors, processes,
files, a hierarchy of memory, clocks, disks, network interfaces, I/O devices, protection
and security. The operating system’s purpose is to provide an orderly and controlled
allocation of all of these resources among the programs competing for them.
This is a core course on Operating Systems covering the broader perspective of its
operation. Each unit includes presentation of the contents with examples and solved
problems. This course introduces the key mechanisms of the operating system like
process management, memory management, file systems, I/O management and
protection and security. The whole course is organised into 4 Blocks.
Block 1 covers the Introduction to Operating Systems, the role of processes, their
creation, process scheduling, interprocess communication and synchronization and
deadlocks.
Block 2 covers the study of storage management: static and dynamic allocation, paging
and segmentation, virtual memory and demand paging, page replacement of algorithms
and memory caches and their effect on performance followed by the File management
concepts, such as input/output hardware and software, files, directories and access
mechanisms, file allocation and access algorithms and performance. Finally, the
increasingly important areas of protection and security are discussed like the goals,
authentication, access mechanisms, protection domains, access control lists and
capabilities, and monitoring.
Block 3 covers the advanced topics in Operating Systems like Multiprocessor systems.
Distributed systems and Mobile Operating Systems.
Block 4 covers the case studies on Windows 10, Linux, Android and iOS.
There is a lab component associated with this course (i.e., Section-1 Windows 10 and
Section-2 Linux of MCSL-204 course).
BLOCK INTRODUCTION
The objective of this block is to familiarise you with the issues involved in the design
and implementation of modern operating systems. The course does not concentrate on
any particular operating system or hardware platform. The concepts discussed are
applicable to a variety of systems.
The block is organised into 4 units:
Unit 1 covers the introduction to operating systems;
Unit 2 covers the concept of processes, process states, process switch, threads, CPU
scheduling, various scheduling algorithms and their performance criteria;
Unit 3 covers the interprocess communication, concurrency and synchronization aspects
of the processes, and
Unit 4 covers the deadlocks, conditions for the deadlock, deadlock prevention,
avoidance and recovery.
UNIT 1 OPERATING SYSTEM: AN OVERVIEW
Structure
1.1 Introduction
1.2 Objectives
1.3 What is an Operating System?
1.4 Goals of an Operating System
1.5 Generations of Operating Systems
1.5.1 0th Generation
1.5.2 First Generation (1951-1956)
1.5.3 Second Generation (1956-1964)
1.5.4 Third Generation (1964-1979)
1.5.5 Fourth Generation (1979-Present)
1.6 Types of Operating Systems
1.6.1 Batch Processing Operating System
1.6.2 Time Sharing
1.6.3 Real Time Operating System (RTOS)
1.6.4 Multiprogramming Operating System
1.6.5 Multiprocessing System
1.6.6 Networking Operating System
1.6.7 Distributed Operating System
1.6.8 Operating Systems for Embedded Devices
1.7 Desirable Qualities of OS
1.8 Operating Systems: Some Examples
1.8.1 DOS
1.8.2 UNIX
1.8.3 Windows
1.8.4 Macintosh
1.9 Functions of OS
1.9.1 Process Management
1.9.2 Memory Management
1.9.3 Secondary Storage Management
1.9.4 I/O Management
1.9.5 File Management
1.9.6 Protection
1.9.7 Networking
1.9.8 Command Interpretation
1.10 Summary
1.11 Solutions/Answers
1.12 Further Readings
1.1 INTRODUCTION
Computer software can be divided into two main categories: application software and
system software. Application software consists of the programs for performing tasks
particular to the machine’s utilization. This software is designed to solve a particular
problem for users. Examples of application software include spreadsheets, database
systems, desktop publishing systems, program development software, and games.
On the other hand, system software is more transparent and less noticed by the typical
computer user. This software provides a general programming environment in which
programmers can create specific applications to suit their needs. This environment
provides new functions that are not available at the hardware level and performs tasks
related to executing the application program. System software acts as an interface
between the hardware of the computer and the application software that users need to
run on the computer. The most important type of system software is the operating
system.
An Operating System (OS) is a collection of programs that acts as an interface
between a user of a computer and the computer hardware. The purpose of an operating
system is to provide an environment in which a user may execute the programs.
Operating Systems are viewed as resource managers. The main resource is the computer
hardware in the form of processors, storage, input/output devices, communication
devices, and data. Some of the operating system functions are: implementing the user
interface, sharing hardware among users, allowing users to share data among themselves,
preventing users from interfering with one another, scheduling resources among users,
facilitating input/output, recovering from errors, accounting for resource usage, facilitating
parallel operations, organizing data for secure and rapid access, and handling network
communications.
This unit presents the definition of the operating system, goals of the operating system,
generations of OS, different types of OS and functions of OS.
1.2 OBJECTIVES
After going through this unit, you should be able to:
understand the purpose of an operating system;
describe the general goals of an operating system;
discuss the evolution of operating systems;
describe various functions performed by the OS;
list, discuss and compare various types of OS, and
describe various structures of operating system.
1.5 GENERATIONS OF OPERATING SYSTEMS
Operating systems have been evolving over the years. In this section we will briefly look at this
development of operating systems with respect to the evolution of the hardware/architecture of the
computer systems. Since operating systems have historically been closely tied to the architecture
of the computers on which they run, we will look at successive generations of computers to see
what their operating systems were like. We may not be able to map the operating system generations
exactly to the generations of the computer, but roughly it provides the idea behind them.
We can roughly divide them into five distinct generations that are characterized by
hardware component technology, software development, and mode of delivery of
computer services.
queue at the computer where they will be run. In this case, the user has no interaction
with the job during its processing, and the computer’s response time is the turnaround
time: the time from submission of the job until execution is complete and the results are
ready for return to the person who submitted the job.
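The turnaround time described above is simply the completion time minus the submission time of a job. As a minimal sketch (the function name and the arrival/burst representation are illustrative assumptions, not from the text), the turnaround times for a batch of jobs run one after another can be computed like this:

```python
def fcfs_turnarounds(arrivals, bursts):
    """Turnaround time of each job in a batch run first-come first-served.
    Turnaround = time the results are ready - time the job was submitted."""
    clock, result = 0, []
    for arrive, burst in zip(arrivals, bursts):
        clock = max(clock, arrive) + burst  # job waits for the machine, then runs
        result.append(clock - arrive)       # completion minus submission
    return result
```

For example, three jobs submitted together with run times 3, 2 and 1 have turnaround times 3, 5 and 6: each later job's turnaround includes the time spent waiting for the earlier jobs to finish.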
1.8 OPERATING SYSTEMS: SOME EXAMPLES
In the earlier section we had seen the types of operating systems. In this section we will
study some popular operating systems.
1.8.1 DOS
DOS (Disk Operating System) was the first widely-installed operating system for
personal computers. It is a master control program that is automatically run when you
start your personal computer (PC). DOS stays in the computer all the time letting you
run a program and manage files. It is a single-user operating system from Microsoft for
the PC. It was the first OS for the PC and is the underlying control program for
Windows 3.1, 95, 98 and ME. Windows NT, 2000 and XP emulate DOS in order to
support existing DOS applications.
1.8.2 UNIX
UNIX operating systems are used in widely-sold workstation products from Sun
Microsystems, Silicon Graphics, IBM, and a number of other companies. The UNIX
environment and the client/server program model were important elements in the
development of the Internet and the reshaping of computing as centered in networks
rather than in individual computers. Linux, a UNIX derivative available in both “free
software” and commercial versions, is increasing in popularity as an alternative to
proprietary operating systems.
UNIX is written in C. Both UNIX and C were developed by AT&T and freely
distributed to government and academic institutions, causing UNIX to be ported to a wider
variety of machine families than any other operating system. As a result, UNIX became
synonymous with “open systems”.
UNIX is made up of the kernel, file system and shell (command line interface). The
major shells are the Bourne shell (original), C shell and Korn shell. The UNIX vocabulary
is exhaustive with more than 600 commands that manipulate data and text in every
way conceivable. Many commands are cryptic, but just as Windows hid the DOS
prompt, the Motif GUI presents a friendlier image to UNIX users. Even with its many
versions, UNIX is widely used in mission critical applications for client/server and
transaction processing systems. The UNIX versions that are widely used are Sun’s
Solaris, Digital’s UNIX, HP’s HP-UX, IBM’s AIX and SCO’s UnixWare. A large
number of IBM mainframes also run UNIX applications, because the UNIX interfaces
were added to MVS and OS/390, which have obtained UNIX branding. Linux, another
variant of UNIX, is also gaining enormous popularity. More details can be studied in
Unit-3 of Block-3 of this course.
1.8.3 WINDOWS
Windows is a personal computer operating system from Microsoft that, together with
some commonly used business applications such as Microsoft Word and Excel, has
become a de facto “standard” for individual users in most corporations as well as in
most homes. Windows contains built-in networking, which allows users to share files
and applications with each other if their PC’s are connected to a network. In large
enterprises, Windows clients are often connected to a network of UNIX and NetWare
servers. The server versions of Windows NT and 2000 are gaining market share,
providing a Windows-only solution for both the client and server. Windows is supported
by Microsoft, the largest software company in the world, as well as the Windows
industry at large, which includes tens of thousands of software developers.
This networking support is the reason why Windows became successful in the first
place. However, Windows 95, 98, ME, NT, 2000 and XP are complicated operating
environments. Certain combinations of hardware and software running together can
cause problems, and troubleshooting can be daunting. Each new version of Windows
has interface changes that constantly confuse users and keep support people busy, and
installing Windows applications is problematic too. Microsoft has worked hard to
make Windows 2000 and Windows XP more resilient to installation problems and
crashes in general. More details on Windows 2000 can be studied in Unit-4 of Block-3
of this course.
1.8.4 MACINTOSH
The Macintosh (often called “the Mac”), introduced in 1984 by Apple Computer, was
the first widely-sold personal computer with a graphical user interface (GUI). The
Mac was designed to provide users with a natural, intuitively understandable, and, in
general, “user-friendly” computer interface. This includes the mouse, the use of icons
or small visual images to represent objects or actions, the point-and-click and click-
and-drag actions, and a number of window operation ideas. Microsoft was successful
in adapting user interface concepts first made popular by the Mac in its first Windows
operating system. The primary disadvantage of the Mac is that there are fewer Mac
applications on the market than for Windows. However, all the fundamental applications
are available, and the Macintosh is a perfectly useful machine for almost everybody.
Data compatibility between Windows and Mac is an issue, although it is often overblown
and readily solved.
The Macintosh has its own operating system, Mac OS which, in its latest version is
called Mac OS X. Originally built on Motorola’s 68000 series microprocessors, Mac
versions today are powered by the PowerPC microprocessor, which was developed
jointly by Apple, Motorola, and IBM. While Mac users represent only about 5% of
the total numbers of personal computer users, Macs are highly popular and almost a
cultural necessity among graphic designers and online visual artists and the companies
they work for.
In this section we will discuss some services of the operating system used by its users.
Users of the operating system can be divided into two broad classes: command language
users and system call users. Command language users are those who interact with the
operating system using commands. On the other hand, system call users invoke
services of the operating system by means of run-time system calls during the execution
of programs.
More can be read from Block-4 of MCS-203 Operating Systems.
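To illustrate the second class of users, the sketch below invokes operating system services directly through system calls rather than through a command interpreter. It is a POSIX-only illustration using Python's `os` module; the function name `run_child` is an assumption for this example.

```python
import os

def run_child(argv):
    """A 'system call user' at work: create a process with fork(),
    overlay the child with a new program via exec, and wait for it
    to finish. argv[0] must be findable on the PATH."""
    pid = os.fork()                   # system call: duplicate this process
    if pid == 0:                      # child branch of the fork
        os.execvp(argv[0], argv)      # system call: replace the process image
    _, status = os.waitpid(pid, 0)    # system call: wait for the child to exit
    return os.WEXITSTATUS(status)     # extract the child's exit code
```

A command language user would achieve the same effect by typing the program's name at a shell prompt; the shell itself is just a program that makes these same system calls on the user's behalf.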
1.9 FUNCTIONS OF OS
The main functions of an operating system are as follows:
Process Management
Memory Management
Secondary Storage Management
I/O Management
File Management
Protection
Networking Management
Command Interpretation.
The operating system is responsible for the following activities in connection with disk
management:
Free space management
Storage allocation
Disk scheduling.
More details can be studied in Unit-3 of Block-2 of this course.
The operating system implements the abstract concept of the file by managing mass
storage devices, such as tapes and disks. Also, files are normally organised into directories
to ease their use. Finally, when multiple users have access to files, it may be desirable
to control by whom and in what ways files may be accessed.
The operating system is responsible for the following activities in connection to the file
management:
The creation and deletion of files.
1.9.6 Protection
The various processes in an operating system must be protected from each other’s
activities. For that purpose, various mechanisms can be used to ensure that the
files, memory segments, CPU and other resources can be operated on only by those
processes that have gained proper authorisation from the operating system.
For example, memory addressing hardware ensures that a process can only execute
within its own address space. The timer ensures that no process can gain control of the
CPU without relinquishing it. Finally, no process is allowed to do its own I/O, to
protect the integrity of the various peripheral devices. Protection refers to a mechanism
for controlling the access of programs, processes, or users to the resources defined by
a computer system. This mechanism must provide a means for specifying the controls
to be imposed, together with some means of enforcement.
Protection can improve reliability by detecting latent errors at the interfaces between
component subsystems. Early detection of interface errors can often prevent
contamination of a healthy subsystem by a subsystem that is malfunctioning. An
unprotected resource cannot defend against use (or misuse) by an unauthorised or
incompetent user. More on protection and security can be studied in Unit-4 of
Block-3.
1.9.7 Networking
A distributed system is a collection of processors that do not share memory or a clock.
Instead, each processor has its own local memory, and the processors communicate
with each other through various communication lines, such as high speed buses or
telephone lines. Distributed systems vary in size and function. They may involve
microprocessors, workstations, minicomputers, and large general purpose computer
systems.
The processors in the system are connected through a communication network, which
can be configured in a number of different ways. The network may be fully or partially
connected. The communication network design must consider routing and connection
strategies and the problems of contention and security.
A distributed system provides the user with access to the various resources the system
maintains. Access to a shared resource allows computation speed-up, data availability,
and reliability.
1.10 SUMMARY
This unit presented the principles of operation of an operating system. In this unit we
briefly described the history, the generations and the types of operating systems.
An operating system is a program that acts as an interface between a user of a computer
and the computer hardware. The purpose of an operating system is to provide an
environment in which a user may execute programs. The primary goal of an operating
system is to make the computer convenient to use, and the secondary goal is to use
the hardware in an efficient manner.
Operating systems may be classified by both how many tasks they can perform
“simultaneously” and by how many users can be using the system “simultaneously”.
That is: single-user or multi-user and single-task or multi-tasking. A multi-user
system must clearly be multi-tasking. In the next unit we will discuss the concept of
processes and their management by the OS.
1.11 SOLUTIONS/ANSWERS
Check Your Progress 1
1) The most important type of system software is the operating system. An operating
system has three main responsibilities:
Perform basic tasks, such as recognising input from the keyboard, sending
output to the display screen, keeping track of files and directories on the disk,
and controlling peripheral devices such as disk drives and printers.
Ensure that different programs and users running at the same time do not
interfere with each other.
Provide a software platform on top of which other programs (i.e., application
software) can run.
The first two responsibilities address the need for managing the computer hardware
and the application programs that use the hardware. The third responsibility focuses
on providing an interface between application software and hardware so that application
software can be efficiently developed. Since the operating system is already responsible
for managing the hardware, it should provide a programming interface for application
developers.
Check Your Progress 2
1) The following are the different views of the operating system.
As a scheduler/allocator:
The operating system has resources for which it is in charge.
Responsible for handing them out (and later recovering them).
Resources include CPU, memory, I/O devices, and disk space.
As a virtual machine:
Operating system provides a “new” machine.
– This machine could be the same as the underlying machine. This allows many
users to believe they have an entire piece of hardware to themselves.
– This could implement a different, perhaps more powerful, machine, or
just a different machine entirely. It may be useful to be able to completely
simulate another machine with your current hardware.
As a multiplexer:
Allows sharing of resources, and provides protection from interference.
Provides for a level of cooperation between users.
Economic reasons: we have to take turns.
2) Batch Processing: This strategy involves reading a series of jobs (called a batch)
into the machine and then executing the programs for each job in the batch. This
approach does not allow users to interact with programs while they operate.
Time Sharing: This strategy supports multiple interactive users. Rather than
preparing a job for execution ahead of time, users establish an interactive session
with the computer and then provide commands, programs and data as they are
needed during the session.
Real Time: This strategy supports real-time and process control systems. These
are the types of systems which control satellites, robots, and air-traffic control.
The dedicated strategy must guarantee certain response times for particular
computing tasks or the application is useless.
Check Your Progress 3
1) The advantage of having a multi-user operating system is that normally the hardware
is very expensive, and it lets a number of users share this expensive resource. This
means the cost is divided amongst the users. It also makes better use of the
resources. Since the resources are shared, they are more likely to be in use than
sitting idle being unproductive.
One limitation with multi-user computer systems is that as more users access it,
the performance becomes slower and slower. Another limitation is the cost of
hardware, as a multi-user operating system requires a lot of disk space and memory.
In addition, the actual software for multi-user operating systems tends to cost more
than single-user operating systems.
2) A multi-tasking operating system provides the ability to run more than one program
at once. For example, a user could be running a word processing package, printing
a document, copying files to the floppy disk and backing up selected files to a tape
unit. Each of these tasks the user is doing appears to be running at the same time.
A multi-tasking operating system has the advantage of letting the user run more
than one task at once, leading to increased productivity. The disadvantage is
that the more programs the user runs, the more memory is required.
3) An operating system for a security control system (such as a home alarm system)
would consist of a number of programs. One of these programs would gain control
of the computer system when it is powered on, and initialise the system. The first
task of this initialise program would be to reset (and probably test) the hardware
sensors and alarms. Once the hardware initialisation was complete, the operating
system would enter a continual monitoring routine of all the input sensors. If the
state of any input sensor is changed, it would branch to an alarm generation routine.
UNIT 2 PROCESSES
Structure
2.1 Introduction
2.2 Objectives
2.3 The Concept of Process
2.3.1 Implicit and Explicit Tasking
2.3.2 Processes Relationship
2.3.3 Process States
2.3.4 Implementation of Processes
2.3.5 Process Hierarchy
2.3.6 Threads
2.3.7 Levels of Threads
2.4 System Calls for Process Management
2.5 Process Scheduling
2.5.1 Scheduling Objectives
2.5.2 Types of Schedulers
2.5.3 Scheduling Criteria
2.6 Scheduling Algorithms
2.6.1 First Come First Serve (FCFS)
2.6.2 Shortest Job First (SJF)
2.6.3 Round Robin (RR)
2.6.4 Shortest Remaining Time Next (SRTN)
2.6.5 Priority Based Scheduling or Event Driven (ED) Scheduling
2.7 Performance Evaluation of the Scheduling Algorithms
2.8 Summary
2.9 Solutions/Answers
2.10 Further Readings
2.1 INTRODUCTION
In the earlier unit we have studied the overview and the functions of an operating
system. In this unit we will have detailed discussion on the processes and their
management by the operating system. The other resource management features of
operating systems will be discussed in the subsequent units.
The CPU executes a large number of programs. While its main concern is the execution
of user programs, the CPU is also needed for other system activities. These activities
are called processes. A process is a program in execution. Typically, a batch job is a
process. A time-shared user program is a process. A system task, such as spooling, is
also a process. For now, a process may be considered as a job or a time-shared
program, but the concept is actually more general.
In general, a process will need certain resources such as CPU time, memory, files, I/O
devices, etc., to accomplish its task. These resources are given to the process when it
is created. In addition to the various physical and logical resources that a process
obtains when it is created, some initialisation data (input) may be passed along. For
example, a process whose function is to display the status of a file, say F1, on the
screen, will get the name of the file F1 as an input and execute the appropriate program
to obtain the desired information.
We emphasize that a program by itself is not a process; a program is a passive entity,
while a process is an active entity. It is known that two processes may be associated
with the same program; they are nevertheless considered two separate execution
sequences.
A process is the unit of work in a system. Such a system consists of a collection of
processes, some of which are operating system processes, those that execute system
code, and the rest being user processes, those that execute user code. All of those
processes can potentially execute concurrently.
The operating system is responsible for the following activities in connection with
process management:
The creation and deletion of both user and system processes;
The suspension and resumption of processes;
The provision of mechanisms for process synchronization, and
The provision of mechanisms for deadlock handling.
We will learn the operating system view of the processes, types of schedulers, different
types of scheduling algorithms, in the subsequent sections of this unit.
2.2 OBJECTIVES
After going through this unit, you should be able to:
understand the concepts of process, various states in the process and their
scheduling;
define a process control block;
classify different types of schedulers;
understand various types of scheduling algorithms, and
compare the performance evaluation of the scheduling algorithms.
A distributed computing network server can handle multiple concurrent client sessions
by dedicating an individual task to each active client session.
2.3.2 Processes Relationship
Transition 1 appears when a process discovers that it cannot continue. In order to get
into the blocked state, in some systems the process must execute a system call such as
block. In other systems, when a process reads from a pipe or special file and there is
no input available, the process is automatically blocked.
Transitions 2 and 3 are caused by the process scheduler, a part of the operating system.
Transition 2 occurs when the scheduler decides that the running process has run long
enough, and it is time to let another process have some CPU time. Transition 3 occurs
when all other processes have had their share and it is time for the first process to run
again.
Transition 4 appears when the external event for which a process was waiting happens.
If no other process is running at that instant, transition 3 will be triggered
immediately, and the process will start running. Otherwise it may have to wait in the ready
state for a little while until the CPU is available.
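The three states and the four numbered transitions above can be sketched as a small table-driven state machine. This is purely illustrative: the class name, the `fire` method and the string state names are assumptions for this example, not part of any real operating system interface.

```python
# The three process states of the model described above.
RUNNING, READY, BLOCKED = "running", "ready", "blocked"

# The four numbered transitions: (legal source state, destination state).
TRANSITIONS = {
    1: (RUNNING, BLOCKED),  # process blocks waiting for input
    2: (RUNNING, READY),    # scheduler decides the process has run long enough
    3: (READY, RUNNING),    # scheduler gives the process the CPU
    4: (BLOCKED, READY),    # the awaited external event happens
}

class Process:
    def __init__(self):
        self.state = READY

    def fire(self, n):
        """Apply transition n, checking that it is legal from the current state."""
        src, dst = TRANSITIONS[n]
        if self.state != src:
            raise ValueError(f"transition {n} is illegal from state {self.state}")
        self.state = dst
        return self.state
```

Note that there is no transition from blocked directly to running: an unblocked process must pass through the ready state and wait for the scheduler, exactly as the text describes.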
Using the process model, it becomes easier to think about what is going on inside the
system. There are many processes like user processes, disk processes, terminal
processes, and so on, which may be blocked when they are waiting for something to
happen. When the disk block has been read or the character typed, the process waiting
for it is unblocked and is ready to run again.
To implement the process model, the operating system maintains a table, an array of
structures, called the process table or process control block (PCB) or Switch frame.
Each entry identifies a process with information such as process state, its program
counter, stack pointer, memory allocation, the status of its open files, its accounting
and scheduling information. In other words, it must contain everything about the process
that must be saved when the process is switched from the running state to the ready
state so that it can be restarted later as if it had never been stopped. The following is
the information stored in a PCB.
Process state, which may be new, ready, running, waiting or halted;
Process number, each process is identified by its process number, called process
ID;
Program counter, which indicates the address of the next instruction to be executed
for this process;
CPU registers, which vary in number and type, depending on the concrete
microprocessor architecture;
Memory management information, which include base and bounds registers or
page table;
I/O status information, composed of I/O requests, I/O devices allocated to this process,
a list of open files and so on;
Processor scheduling information, which includes process priority, pointers to
scheduling queues and any other scheduling parameters;
List of open files.
A process structure block is shown in Figure 2.
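The PCB fields listed above can be sketched as a record type. The field names and types below are illustrative assumptions; a real kernel keeps this information in a fixed-layout structure in kernel memory, not a Python object.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Sketch of a process control block with the fields listed in the text."""
    pid: int                      # process number (process ID)
    state: str = "new"            # new, ready, running, waiting or halted
    program_counter: int = 0      # address of the next instruction to execute
    registers: dict = field(default_factory=dict)   # saved CPU registers
    base: int = 0                 # memory management: base register
    limit: int = 0                # memory management: bounds register
    priority: int = 0             # processor scheduling information
    open_files: list = field(default_factory=list)  # I/O status information
```

Everything needed to restart the process "as if it had never been stopped" lives here, which is why the PCB is the unit saved and restored during a context switch.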
Context Switch
A context switch (also sometimes referred to as a process switch or a task switch)
is the switching of the CPU (central processing unit) from one process or thread to
another. A context is the contents of a CPU’s registers and program counter at any
point in time.
A context switch is sometimes described as the kernel suspending execution of one
process on the CPU and resuming execution of some other process that had previously
been suspended.
Context Switch: Steps
In a context switch, the state of the first process must be saved somehow, so that,
when the scheduler gets back to the execution of the first process, it can restore this
state and continue normally.
The state of the process includes all the registers that the process may be using, especially
the program counter, plus any other operating system specific data that may be necessary.
Often, all the data that is necessary for the state is stored in one data structure, called a
process control block (PCB). Now, in order to switch processes, the PCB for the first
process must be updated and saved. The PCBs are sometimes stored on a per-
process stack in kernel memory, or there may be some specific operating-system-
defined data structure for this information.
Let us understand this with the help of an example. Suppose two processes A and B are
in the system. The CPU is executing Process A and Process B is in the wait state. If an
interrupt occurs for Process A, the operating system suspends the execution of the first
process, stores the current information of Process A in its PCB and switches context to the
second process, namely Process B. In doing so, the program counter from the PCB of
Process B is loaded, and thus execution can continue with the new process. The
switching between the two processes, Process A and Process B, is illustrated in Figure
3 given below:
Old or primitive operating systems like MS-DOS are not multiprogrammed, so when
one process starts another, the first process is automatically blocked and waits until
the second is finished.
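The save-and-restore sequence in the example above can be modelled in a few lines. This is a toy simulation (the register names and addresses are invented), not kernel code:

```python
# Toy model of a context switch between Process A and Process B.
cpu = {"pc": 0, "regs": {"ax": 0}}                  # the CPU's current context

pcb_a = {"pid": "A", "pc": 100, "regs": {"ax": 7}}  # saved state of Process A
pcb_b = {"pid": "B", "pc": 200, "regs": {"ax": 9}}  # saved state of Process B

def context_switch(cpu, old_pcb, new_pcb):
    # 1. Save the current CPU context into the PCB of the old process.
    old_pcb["pc"], old_pcb["regs"] = cpu["pc"], dict(cpu["regs"])
    # 2. Load the program counter and registers from the PCB of the new process.
    cpu["pc"], cpu["regs"] = new_pcb["pc"], dict(new_pcb["regs"])
    return new_pcb["pid"]

# The CPU is executing Process A...
cpu["pc"], cpu["regs"] = pcb_a["pc"], dict(pcb_a["regs"])
cpu["pc"] = 150                                     # A runs for a while
running = context_switch(cpu, pcb_a, pcb_b)         # interrupt: switch to B
```

After the switch, Process A's PCB records exactly where it stopped, so it can later be restarted as if it had never been interrupted.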
2.2.6 Threads
Threads, sometimes called lightweight processes (LWPs), are independently scheduled
parts of a single program. We say that a task is multithreaded if it is composed of
several independent sub-processes which do work on common data, and if each of
those pieces could (at least in principle) run in parallel.
If we write a program which uses threads – there is only one program, one executable
file, one task in the normal sense. Threads simply enable us to split up that program
into logically separate pieces, and have the pieces run independently of one another,
until they need to communicate. In a sense, threads are a further level of object
orientation for multitasking systems. They allow certain functions to be executed in
parallel with others.
On a truly parallel computer (several CPUs) we might imagine parts of a program
(different subroutines) running on quite different processors, until they need to
communicate. When one part of the program needs to send data to the other part, the
two independent pieces must be synchronized, or be made to wait for one another.
But what is the point of this? We can always run independent procedures in a program
as separate programs, using the process mechanisms we have already introduced.
They could communicate using normal interprocess communication. Why introduce
another new concept? Why do we need threads?
The point is that threads are cheaper than normal processes, and that they can be
scheduled for execution in a user-dependent way, with less overhead. Threads are
cheaper than a whole process because they do not have a full set of resources each.
Whereas the process control block for a heavyweight process is large and costly to
context switch, the PCBs for threads are much smaller, since each thread has only a
stack and some registers to manage. It has no open file lists or resource lists, no
accounting structures to update. All of these resources are shared by all threads within
the process. Threads can be assigned priorities – a higher priority thread will get put to
the front of the queue. Let us define heavy and lightweight processes with the help of a
table.
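The sharing described above is easy to demonstrate. In the sketch below, two threads of one process update a single shared counter; precisely because memory is shared, a lock is needed around the update:

```python
import threading

# All threads in a process share its memory; each has only its own
# stack and registers. Here two threads update one shared counter.
counter = 0
lock = threading.Lock()     # needed precisely because memory is shared

def worker(n):
    global counter
    for _ in range(n):
        with lock:          # without the lock, updates could be lost
            counter += 1

t1 = threading.Thread(target=worker, args=(1000,))
t2 = threading.Thread(target=worker, args=(1000,))
t1.start(); t2.start()
t1.join(); t2.join()
```

Creating the two threads required no new address space, file tables or accounting structures, which is exactly why threads are cheaper than full processes.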
In modern operating systems, there are two levels at which threads operate: system or
kernel threads and user level threads. If the kernel itself is multithreaded, the scheduler
assigns CPU time on a thread basis rather than on a process basis. A kernel level
thread behaves like a virtual CPU, or a power-point to which user-processes can
connect in order to get computing power. The kernel has as many system level threads
as it has CPUs and each of these must be shared between all of the user-threads on the
system. In other words, the maximum number of user level threads which can be active
at any one time is equal to the number of system level threads, which in turn is equal to
the number of CPUs on the system.
Since threads work “inside” a single task, the normal process scheduler cannot normally
tell which thread to run and which not to run – that is up to the program. When the
kernel schedules a process for execution, it must then find out from that process which
is the next thread it must execute. If the program is lucky enough to have more than one
processor available, then several threads can be scheduled at the same time.
Some important implementations of threads are:
The Mach System / OSF1 (user and system level)
Solaris 1 (user level)
Solaris 2 (user and system level)
OS/2 (system level only)
NT threads (user and system level)
IRIX threads
POSIX standardized user threads interface.
Check Your Progress 1
1) Explain the difference between a process and a thread with some examples.
........................................................................................................................
........................................................................................................................
........................................................................................................................
2) Identify the different states a live process may occupy and show how a process
moves between these states.
........................................................................................................................
........................................................................................................................
........................................................................................................................
3) Define what is meant by a context switch. Explain the reason many systems use
two levels of scheduling.
........................................................................................................................
........................................................................................................................
........................................................................................................................

2.3 SYSTEM CALLS FOR PROCESS MANAGEMENT
In this section we will discuss system calls typically provided by the kernels of
multiprogramming operating systems for process management. System calls provide
the interface between a process and the operating system. These system calls are the
routine services of the operating system. As an example of how system calls are used,
consider writing a simple program to read data from one file and copy it to another
file. The program requires the names of two different files: an input file and an
output file. One approach is for the program to ask the user for the names
of the two files. In an interactive system, this approach will require a sequence of
system calls, first to write a prompting message on the screen and then to read from
the keyboard the characters that make up the two file names. Once the two file names
are obtained, the program must open the input file and create the output file. Each of these operations
requires another system call and may encounter possible error conditions. When the
program tries to open the input file, it may find that no file of that name exists or that the
file is protected against access. In these cases the program should print a message on
the console and then terminate abnormally, which requires another system call. If the
input file exists then we must create the new output file. We may find an output file with
the same name. This situation may cause the program to abort, or we may delete the
existing file and create a new one. After opening both files, we may enter a loop that
reads from input file and writes to output file. Each read and write must return status
information regarding various possible error conditions. Finally, after the entire file is
copied the program may close both files. Examples of some operating system calls are:
Create: In response to the create call the operating system creates a new process
with the specified or default attributes and identifier. Some of the parameters definable
at the process creation time include:
level of privilege, such as system or user
priority
size and memory requirements
maximum data area and/or stack size
memory protection information and access rights
other system dependent data.
Delete: The delete service is also called destroy, terminate or exit. Its execution causes
the operating system to destroy the designated process and remove it from the system.
Abort: It is used to terminate the process forcibly. Although a process could conceivably
abort itself, the most frequent use of this call is for involuntary terminations, such as
removal of a malfunctioning process from the system.
Fork/Join: Another method of process creation and termination is by means of the FORK/
JOIN pair, originally introduced as primitives for multiprocessor systems. The FORK
operation is used to split a sequence of instructions into two concurrently executable
sequences. JOIN is used to merge the two sequences of code divided by the FORK
and it is available to a parent process for synchronization with a child.
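The FORK/JOIN pairing can be sketched with the POSIX-style fork and waitpid calls exposed by Python's os module (an illustration of the idea, assuming a Unix-like system, not the original multiprocessor primitives):

```python
import os

# FORK splits execution into two concurrent sequences; the parent then
# "joins" with the child by waiting for it to terminate.
pid = os.fork()
if pid == 0:
    # Child: the second executable sequence.
    os._exit(7)                     # terminate with a status the parent can read
else:
    # Parent: continues concurrently, then joins with the child.
    _, status = os.waitpid(pid, 0)  # JOIN: synchronize with the child
    child_code = os.WEXITSTATUS(status)
```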
Suspend: The suspend system call is also called BLOCK in some systems. The
designated process is suspended indefinitely and placed in the suspend state. A
process may suspend itself or another process when authorized to do so.
Resume: The resume system call is also called WAKEUP in some systems. This call
resumes the target process, which is presumably suspended. Obviously a suspended
process cannot resume itself because a process must be running to have its operating
system call processed. So a suspended process depends on a partner process to issue
the resume.
Delay: The system call delay is also known as SLEEP. The target process is suspended
for the duration of the specified time period. The time may be expressed in terms of
system clock ticks that are system dependent and not portable or in standard time
units such as seconds and minutes. A process may delay itself or optionally, delay
some other process.
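On a POSIX system, suspend, resume, abort and delay map naturally onto signals and sleep; the sketch below uses SIGSTOP, SIGCONT and SIGTERM in those roles (this is one possible illustration, not how every system implements these calls):

```python
import os, signal, time

# A partner (parent) process suspends, resumes and finally aborts a
# child that has delayed itself with sleep.
pid = os.fork()
if pid == 0:
    time.sleep(30)              # Delay / SLEEP: suspend itself for a period
    os._exit(0)

time.sleep(0.1)                 # give the child time to start sleeping
os.kill(pid, signal.SIGSTOP)    # Suspend / BLOCK the target process
_, status = os.waitpid(pid, os.WUNTRACED)
was_stopped = os.WIFSTOPPED(status)

os.kill(pid, signal.SIGCONT)    # Resume / WAKEUP: a partner must issue this,
                                # since a suspended process cannot resume itself
os.kill(pid, signal.SIGTERM)    # Abort: involuntary, forcible termination
_, status = os.waitpid(pid, 0)
was_killed = os.WIFSIGNALED(status)
```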
Get_Attributes: It is an enquiry to which the operating system responds by providing
the current values of the process attributes, or their specified subset, from the PCB.
Change Priority: It is an instance of a more general SET-PROCESS-ATTRIBUTES
system call. Obviously, this call is not implemented in systems where process priority is
static.
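The file-copy scenario described at the start of this section can be sketched with the POSIX-style calls os.open, os.read and os.write. The function name and the 4 KB buffer size are our own choices for illustration:

```python
import os, sys

def copy_file(in_name, out_name):
    # Open the input file; this system call may fail if no file of that
    # name exists or the file is protected against access.
    try:
        in_fd = os.open(in_name, os.O_RDONLY)
    except OSError as e:
        print(f"cannot open {in_name}: {e}", file=sys.stderr)
        return False
    # Create the output file; O_EXCL makes the call fail if a file with
    # the same name already exists (the situation the text describes).
    try:
        out_fd = os.open(out_name, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o644)
    except OSError as e:
        os.close(in_fd)
        print(f"cannot create {out_name}: {e}", file=sys.stderr)
        return False
    # Loop: each read and write is a system call that reports its status.
    while True:
        data = os.read(in_fd, 4096)
        if not data:                # end of the input file
            break
        os.write(out_fd, data)
    os.close(in_fd)                 # finally, close both files
    os.close(out_fd)
    return True
```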
Check Your Progress 2
1) Distinguish between a foreground and a background process in UNIX.
........................................................................................................................
........................................................................................................................
........................................................................................................................
2) Identify the information which must be maintained by the operating system for
each live process.
........................................................................................................................
........................................................................................................................
........................................................................................................................
from running to ready without the process requesting it. An OS implementing this
algorithm switches to the processing of a new request before completing the processing
of the current request. The preempted request is put back into the list of pending
requests. Its servicing would be resumed sometime in the future when it is scheduled
again. Preemptive scheduling is more useful for high priority processes which require
an immediate response. For example, in a real-time system the consequence of missing
one interrupt could be dangerous.
Round Robin scheduling, priority based scheduling or event driven scheduling and
SRTN are considered to be the preemptive scheduling algorithms.
First come First Served (FCFS) and Shortest Job First (SJF), are considered to be
the non-preemptive scheduling algorithms.
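The choice between these non-preemptive algorithms already affects waiting time. As a sketch, with hypothetical burst times (the figures below are invented purely for illustration):

```python
# Hypothetical workload: burst times in milliseconds, all arriving at time 0.
bursts = {"P1": 24, "P2": 3, "P3": 3}

def avg_waiting_time(order):
    """Average waiting time when processes run to completion in `order`."""
    clock, total_wait = 0, 0
    for name in order:
        total_wait += clock      # everything run so far is this job's wait
        clock += bursts[name]
    return total_wait / len(order)

fcfs = avg_waiting_time(["P1", "P2", "P3"])              # arrival order
sjf = avg_waiting_time(sorted(bursts, key=bursts.get))   # shortest job first
```

With this workload FCFS averages 17 ms of waiting while SJF averages 3 ms, because the long job no longer delays the two short ones.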
The decision whether to schedule preemptive or not depends on the environment and
the type of application most likely to be supported by a given operating system.
Under some circumstances, CPU utilisation can also suffer. In the situation described
above, once a CPU-bound process does issue an I/O request, the CPU can return to
process all the I/O-bound processes. If their processing completes before the CPU-
bound process’s I/O completes, the CPU sits idle. So with no preemption, component
utilisation and the system throughput rate may be quite low.
Example:
Calculate the turnaround time, waiting time, average turnaround time, average waiting
time, throughput and processor utilization for the given set of processes that arrive at
the arrival times shown in the table, with the length of processing time given in
milliseconds:
If the processes arrive as per the arrival time, the Gantt chart will be
Note: If all the processes arrive at time 0, then the order of scheduling will be
P3, P5, P1, P2 and P4. By taking arrival time as 0 for all the processes, calculate
the above parameters and see the difference in the values.
Using SJF scheduling, because the process with the shortest processing time gets
executed first, the Gantt chart will be:
Because the shortest processing time is that of process P4, it executes first, then
process P1, then P3 and then process P2. The waiting time for process P1 is 3 ms,
for process P2 is 16 ms, for process P3 is 9 ms and for the process P4 is 0 ms.
At time 0, only process P1 has entered the system, so it is the process that executes.
At time 1, process P2 arrives. At that time, process P1 has 4 time units left to execute.
At this juncture process P2’s processing time is less than P1’s remaining time (4
units). So P2 starts executing at time 1. At time 2, process P3 enters the system with
a processing time of 5 units. Process P2 continues executing as it has the minimum
number of time units when compared with P1 and P3. At time 3, process P2 terminates
and process P4 enters the system. Of the processes P1, P3 and P4, P4 has the smallest
remaining execution time so it starts executing. When process P1 terminates at time
10, process P3 executes. The Gantt chart is shown below:
Turnaround time for each process can be computed by subtracting the arrival time
from the time at which the process terminated.
Turn around Time = t(Process Completed)– t(Process Submitted)
The turnaround time for each of the processes is:
P1: 10 – 0 = 10
P2: 3 – 1 = 2
P3: 15 – 2 = 13
P4: 6 – 3 = 3
The average turnaround time is (10+2+13+3) / 4 = 7 milliseconds.
The waiting time can be computed by subtracting processing time from turnaround
time, yielding the following 4 results for the processes as
P1: 10 – 5 = 5
P2: 2 – 2 = 0
P3: 13 – 5 = 8
P4: 3 – 3 = 0
The average waiting time = (5+0+8+0) / 4 = 3.25 milliseconds.
Four jobs executed in 15 time units, so the throughput is 4 / 15 = 0.27 jobs per time unit.
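The arithmetic above can be checked mechanically. The arrival and processing times are those of the worked example; the completion times are read off the SRTN Gantt chart:

```python
# Processes from the worked example: (arrival time, processing time) in ms.
procs = {"P1": (0, 5), "P2": (1, 2), "P3": (2, 5), "P4": (3, 3)}

# Completion times read off the SRTN Gantt chart in the text.
completed = {"P1": 10, "P2": 3, "P3": 15, "P4": 6}

# Turnaround time = t(process completed) - t(process submitted).
turnaround = {p: completed[p] - procs[p][0] for p in procs}
# Waiting time = turnaround time - processing time.
waiting = {p: turnaround[p] - procs[p][1] for p in procs}

avg_tat = sum(turnaround.values()) / len(procs)
avg_wait = sum(waiting.values()) / len(procs)
```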
Using priority scheduling we would schedule these processes according to the following
Gantt chart:
Average turnaround time = (1+6+16+18+19) / 5 = 60/5 = 12
Average waiting time = (6+0+16+18+1) / 5 = 8.2
Throughput = 5/19 = 0.26
Processor utilization = (30/30) * 100 = 100%
Priorities can be defined either internally or externally. Internally defined priorities use
one or more measurable quantities to compute the priority of a process.
For example, assume that we have the following five processes, arriving at time 0, in
the order given, with the length of CPU time given in milliseconds.
First consider the FCFS scheduling algorithm for the set of processes.
For FCFS scheduling the Gantt chart will be:
Now consider the SJF scheduling, the Gantt chart will be:
Check Your Progress 3
1) Explain the difference between voluntary or co-operative scheduling and preemptive
scheduling. Give two examples of preemptive and of non-preemptive scheduling
algorithms.
........................................................................................................................
........................................................................................................................
........................................................................................................................
........................................................................................................................
3) Draw the Gantt chart for the FCFS policy, considering the following set of processes
that arrive at time 0, with the length of CPU time given in milliseconds. Also calculate
the Average Waiting Time.
........................................................................................................................
........................................................................................................................
........................................................................................................................
........................................................................................................................
4) For the given five processes arriving at time 0, in the order with the length of CPU
time in milliseconds:
Consider the FCFS, SJF and RR (time slice = 10 milliseconds) scheduling algorithms
for the above set of processes. Which algorithm would give the minimum average
waiting time?
........................................................................................................................
........................................................................................................................
........................................................................................................................
2.7 SUMMARY
A process is an instance of a program in execution, and is an important concept in
modern operating systems. Processes
provide a suitable means for informing the operating system about independent activities
that may be scheduled for concurrent execution. Each process is represented by a
process control block (PCB). Several PCBs can be linked together to form a queue
of waiting processes. The selection and allocation of processes is done by a scheduler.
There are several scheduling algorithms. We have discussed the FCFS, SJF, RR, SRTN
and priority algorithms along with their performance evaluation.
2.8 SOLUTIONS/ANSWERS
Check Your Progress 1
1) A process is an instance of an executing program and its data. For example, if you
were editing 3 files simultaneously, you would have 3 processes, even though they
might be sharing the same code.
A thread is often called a lightweight process. It is a “child” and has all the state
information associated with a child process but does not run in a separate address
space, i.e., it does not have its own memory. Since they share memory, each thread
has access to all variables and any change made by one thread impacts all the
others. Threads are useful in multi-client servers where each thread can service a
separate client connection, and in some stand-alone processes where you can split the
processing such that one thread can continue processing while another is blocked,
e.g., waiting for an I/O request, or a timer thread can implement a regular repaint of
a graphics canvas.
This is how simple animations are accomplished in systems like NT and
Windows 95. Traditional Unix was single-threaded and could not do this at the
operating system level, although some applications, e.g., Java, could support it via
software.
2) A process can be in one of the following states:
New i.e., just created and not yet admitted to the set of ready processes
Ready i.e., in the scheduling queue waiting its turn for the CPU
Running i.e., currently occupying the CPU
Waiting (blocked) i.e., unable to proceed until some event, such as the completion
of an I/O operation, occurs
Halted (terminated) i.e., finished execution
3) Some processes are more critical to overall operation than others, e.g., kernel
activity. I/O bound processes, which seldom use a full quantum, need a boost
over compute bound tasks. This priority can be implemented by any combination
of: higher positioning in the ready queue for higher priority; more frequent occurrences
in the ready list for different priorities; different quantum lengths for different
priorities. Priority scheduling is liable to starvation. A succession of high priority
processes may result in a low priority process never reaching the head of the
ready list. Priority is normally a combination of static priority (set by the user or the
class of the process) and a dynamic adjustment based on its history (e.g., how
long it has been waiting). NT and Linux also boost priority when processes return
from waiting for I/O.
If the processes arrive in the order P1, P2 and P3, then the Gantt chart will be as follows:
From the above calculations of average waiting time we found that the SJF policy results
in less than one half of the average waiting time obtained from FCFS, while Round
Robin gives an intermediate result.
UNIT 3 INTERPROCESS COMMUNICATION
AND SYNCHRONIZATION
Structure
3.1 Introduction
3.2 Objectives
3.3 Interprocess Communication
3.3.1 Shared-Memory System
3.3.2 Message-Passing System
3.4 Interprocess Synchronization
3.4.1 Serialization
3.4.2 Mutexes: Mutual Exclusion
3.4.3 Critical Sections: The Mutex Solution
3.4.4 Dekker’s Solution for Mutual Exclusion
3.4.5 Bakery’s Algorithm
3.5 Semaphores
3.6 Classical Problems in Concurrent Programming
3.6.1 Producers/Consumers Problem
3.6.2 Readers and Writers Problem
3.6.3 Dining-Philosophers Problem
3.6.4 Sleeping Barber Problem
3.7 Locks
3.8 Monitors and Condition Variables
3.9 Summary
3.10 Solution/Answers
3.11 Further Readings
3.1 INTRODUCTION
In the earlier unit we have studied the concept of processes. In addition to process
scheduling, another important responsibility of the operating system is process
synchronization. Synchronization involves the orderly sharing of system resources by
processes.
Concurrency specifies two or more sequential programs (a sequential program specifies
sequential execution of a list of statements) that may be executed concurrently as a
parallel process. For example, an airline reservation system that involves processing
transactions from many terminals has a natural specification as a concurrent program in
which each terminal is controlled by its own sequential process. Even when processes
are not executed simultaneously, it is often easier to structure a system as a collection of
cooperating sequential processes rather than as a single sequential program.
A simple batch operating system can be viewed as 3 processes – a reader process, an
executor process and a printer process. The reader reads cards from the card reader
and places card images in an input buffer. The executor process reads card images
from the input buffer, performs the specified computation and stores the result in an
output buffer. The printer process retrieves the data from the output buffer and writes
them to a printer. Concurrent processing is the basis of operating systems which support
multiprogramming.
Shared Data: A segment of memory must be available to both the processes. (Most
memory is locked to a single process).
Waiting: Some processes wait for other processes to give a signal before continuing.
This is an issue of synchronization.
In order to cooperate, concurrently executing processes must communicate and
synchronize. Interprocess communication is based on the use of shared variables
(variables that can be referenced by more than one process) or message passing.
In this unit, let us study the concept of interprocess communication and synchronization,
need of semaphores, classical problems in concurrent processing, critical regions,
monitors and message passing.
3.2 OBJECTIVES
After studying this unit, you should be able to:
identify the significance of interprocess communication and synchronization;
describe the two ways of interprocess communication, namely shared memory
and message passing;
discuss the usage of semaphores, locks and monitors in interprocess
communication and synchronization; and
solve classical problems in concurrent programming.
3.3 INTERPROCESS COMMUNICATION
Direct Communication
In direct communication, each process that wants to send or receive a message must
explicitly name the recipient or sender of the communication. In this case, the send and
receive primitives are defined as follows:
send (P, message) To send a message to process P
receive (Q, message) To receive a message from process Q
This scheme shows the symmetry in addressing, i.e., both the sender and the receiver
have to name one another in order to communicate. In contrast to this, asymmetry in
addressing can be used, i.e., only the sender has to name the recipient; the recipient is
not required to name the sender. So the send and receive primitives can be defined as
follows:
send (P, message) To send a message to the process P
receive (id, message) To receive a message from any process; id is set to
the name of the process with whom the
communication has taken place.
Indirect Communication
With indirect communication, the messages are sent to, and received from a mailbox.
A mailbox can be abstractly viewed as an object into which messages may be placed
and from which messages may be removed by processes. In order to distinguish one
from the other, each mailbox has a unique identification. A process may communicate
with some other process by a number of different mailboxes. The send and receive
primitives are defined as follows:
send (A, message) To send a message to the mailbox A
receive (A, message) To receive a message from the mailbox A
Mailboxes may be owned either by a process or by the system. If the mailbox is
owned by a process, then we distinguish between the owner, who can only receive
from this mailbox, and the user, who can only send messages to the mailbox. When a process
that owns a mailbox terminates, its mailbox disappears. Any process that sends a
message to this mailbox must be notified in the form of an exception that the mailbox
no longer exists.
If the mailbox is owned by the operating system, then it has an existence of its own,
i.e., it is independent and not attached to any particular process. The operating system
provides a mechanism that allows a process to: a) create a new mailbox, b) send and
receive message through the mailbox and c) destroy a mailbox. Since all processes
with access rights to a mailbox may terminate, a mailbox may no longer be accessible
by any process after some time. In this case, the operating system should reclaim
whatever space was used for the mailbox.
Link Capacity
A link has some capacity that determines the number of messages that can temporarily
reside in it. This property can be viewed as a queue of messages attached to the link.
Basically there are three ways through which such a queue can be implemented:
Zero capacity: This link has a message queue length of zero, i.e., no message can
wait in it. The sender must wait until the recipient receives the message. The two
processes must be synchronized for a message transfer to take place. The zero-capacity
link is referred to as a message-passing system without buffering.
Bounded capacity: This link has a limited message queue length of n, i.e., at most n
messages can reside in it. If a new message is sent, and the queue is not full, it is placed
in the queue either by copying the message or by keeping a pointer to the message and
the sender should continue execution without waiting. Otherwise, the sender must be
delayed until space is available in the queue.
Unbounded capacity: This queue has potentially infinite length, i.e., any number of
messages can wait in it. That is why the sender is never delayed.
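A bounded-capacity link can be sketched with a fixed-size queue; the capacity of 2, the thread structure and the message names below are arbitrary illustrations:

```python
import queue, threading

# A bounded-capacity mailbox: at most 2 messages may reside in it,
# so a sender blocks once the queue is full until space is available.
mailbox = queue.Queue(maxsize=2)

received = []

def receiver():
    for _ in range(3):
        received.append(mailbox.get())   # receive(A, message)

t = threading.Thread(target=receiver)
mailbox.put("m1")       # send(A, message): queue not full, returns at once
mailbox.put("m2")       # queue is now full
t.start()
mailbox.put("m3")       # sender blocks here until the receiver removes m1
t.join()
```

A zero-capacity link corresponds to a rendezvous, where `put` always waits for a matching `get`; an unbounded link would be `queue.Queue()` with no `maxsize`, so the sender never blocks.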
Bounded and unbounded capacity links provide a message-passing system with automatic
buffering.
Messages
Messages sent by a process may be of one of three varieties: a) fixed-sized, b) variable-
sized and c) typed messages. If only fixed-sized messages can be sent, the physical
implementation is straightforward. However, this makes the task of programming more
difficult. On the other hand, variable-sized messages require a more complex physical
implementation, but the programming becomes simpler. Typed messages, i.e., associating
a type with each mailbox, are applicable only to indirect communication. The messages
that can be sent to, and received from, a mailbox are restricted to the designated type.
3.4.1 Serialization
The key idea in process synchronization is serialization. This means that we have to
go to some pains to undo the work we have put into making an operating system
perform several tasks in parallel. As we mentioned in the case of print queues, parallelism
is not always appropriate.
Synchronization is a large and difficult topic, so we shall only undertake to describe the
problem and some of the principles involved here.
There are essentially two strategies to serializing processes in a multitasking environment.
The scheduler can be disabled for a short period of time, to prevent control
being given to another process during a critical action like modifying shared
data. This method is very inefficient on multiprocessor machines, since all
other processors have to be halted every time one wishes to execute a critical
section.
A protocol can be introduced which all programs sharing data must obey.
The protocol ensures that processes have to queue up to gain access to shared
data. Processes which ignore the protocol ignore it at their own peril (and the
peril of the remainder of the system!). This method works on multiprocessor
machines also, though it is more difficult to visualize. The responsibility of
serializing important operations falls on programmers. The OS cannot impose
any restrictions on silly behaviour – it can only provide tools and mechanisms
to assist the solution of the problem.
Where more processes are involved, some modifications are necessary to this algorithm.
The key to serialization here is that, if a second process tries to obtain the mutex when
another already has it, it will get caught in a loop, which does not terminate until the
other process has released the mutex. This solution is said to involve busy waiting, i.e.,
the program actively executes an empty loop, wasting CPU cycles, rather than moving
the process out of the scheduling queue. This is also called a spin lock, since the
system ‘spins’ on the loop while waiting. Let us see another algorithm which handles
the critical section problem for n processes.
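A busy-waiting mutex of the kind just described can be sketched as follows. Since Python cannot express a hardware test-and-set instruction directly, the atomicity of the test-and-set is simulated here with a small internal lock; the class and method names are our own:

```python
import threading

class SpinLock:
    """Busy-waiting mutex. The test-and-set is simulated; on real hardware
    it would be a single atomic instruction."""
    def __init__(self):
        self._flag = False
        self._tas = threading.Lock()   # stands in for hardware atomicity

    def _test_and_set(self):
        with self._tas:
            old, self._flag = self._flag, True
            return old                 # True means someone already holds it

    def acquire(self):
        while self._test_and_set():    # spin: an empty loop wastes CPU cycles
            pass

    def release(self):
        self._flag = False

spin = SpinLock()
count = 0

def work():
    global count
    for _ in range(100):
        spin.acquire()
        count += 1                     # critical section
        spin.release()

threads = [threading.Thread(target=work) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```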
It should be noted that Dekker’s solution does rely on one very simple assumption
about the underlying hardware; it assumes that if two processes attempt to write two
different values in the same memory location at the same time, one or the other value
will be stored and not some mixture of the two. This is called the atomic update
assumption. The atomically updatable unit of memory varies considerably from one
system to another; on some machines, any update of a word in memory is atomic, but
an attempt to update a byte is not atomic, while on others, updating a byte is atomic
while words are updated by a sequence of byte updates.
If processes Pi and Pj receive the same number, then if i < j, Pi is served first;
else Pj is served first.
The numbering scheme always generates numbers in non-decreasing order of
enumeration; i.e., 1,2,3,3,3,3,4,5...
Notation: <= denotes lexicographical order on (ticket #, process id #):
o (a,b) < (c,d) if a < c, or if a = c and b < d
o max(a0, ..., an-1) is a number k such that k >= ai for i = 0, ..., n-1
Shared data
boolean choosing[n]; //initialise all to false
int number[n]; //initialise all to 0
Data structures are initialized to false and 0, respectively.
The algorithm is as follows:
do
{
    choosing[i] = true;
    number[i] = max(number[0], number[1], ..., number[n-1]) + 1;
    choosing[i] = false;
    for (int j = 0; j < n; j++)
    {
        while (choosing[j] == true)
        {
            /* do nothing */
        }
        while ((number[j] != 0) &&
               ((number[j], j) < (number[i], i))) // lexicographical order; see the Notation above
        {
            /* do nothing */
        }
    }
    do critical section
    number[i] = 0;
    do remainder section
} while (true)
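To make the algorithm concrete, the bakery lock can be simulated with threads. The sketch below is a rough Python illustration (the thread count and iteration count are arbitrary choices, and CPython's interpreter behaviour stands in for the atomic update assumption):

```python
import threading

N = 3                    # number of competing threads (arbitrary choice)
ITERS = 50               # critical-section entries per thread
choosing = [False] * N   # choosing[i]: thread i is picking a ticket
number = [0] * N         # number[i]: thread i's ticket (0 = not competing)
counter = 0              # shared data protected by the bakery lock

def lock(i):
    choosing[i] = True
    number[i] = max(number) + 1   # take a ticket one larger than any seen
    choosing[i] = False
    for j in range(N):
        while choosing[j]:        # wait while j is still choosing
            pass
        # wait while j holds a smaller (ticket, id) pair
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass

def unlock(i):
    number[i] = 0                 # leave the bakery

def worker(i):
    global counter
    for _ in range(ITERS):
        lock(i)
        counter += 1              # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 150: every increment happened under mutual exclusion
```

Note how the busy-wait loops spin exactly as described above; moving a waiting process out of the run queue is what semaphores, studied next, are for.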
In the next section we will study how semaphores provide a much more organized
approach to the synchronization of processes.
3.4 SEMAPHORES
Semaphores provide a much more organized approach to controlling the interaction of
multiple processes than would be available if each user had to solve all interprocess
communications using simple variables, but more organization is possible. In a sense,
semaphores are something like the goto statement in early programming languages;
they can be used to solve a variety of problems, but they impose little structure on the
solution and the results can be hard to understand without the aid of numerous comments.
Just as there have been numerous control structures devised in sequential programs to
reduce or even eliminate the need for goto statements, numerous specialized concurrent
control structures have been developed which reduce or eliminate the need for
semaphores.
Definition: The effective synchronization tools often used to realize mutual exclusion
in more complex systems are semaphores. A semaphore S is an integer variable which
can be accessed only through two standard atomic operations: wait and signal. The
definition of the wait and signal operations are:
wait(S): while S <= 0 do skip;
S := S – 1;
signal(S): S := S + 1;
or in C language notation we can write it as:
wait(s)
{
while (S<=0)
{
/*do nothing*/
}
S= S-1;
}
signal(S)
{
S = S + 1;
}
It should be noted that the test (S <= 0) and the modification of the integer value of S,
i.e., S := S – 1, must be executed without interruption. In general, if one process modifies
the integer value of S in the wait and signal operations, no other process can
simultaneously modify that same S value. We briefly explain the usage of semaphores
in the following example:
Consider two concurrently running processes: P1 with a statement S1 and P2 with a
statement S2. Suppose that we require that S2 be executed only after S1 has completed.
This scheme can be implemented by letting P1 and P2 share a common semaphore
synch, initialised to 0, and by inserting the statements:
S1;
signal(synch);
in the process P1 and the statements:
wait(synch);
S2;
in the process P2.
Since synch is initialised to 0, P2 will execute S2 only after P1 has invoked
signal(synch), which is after S1.
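This ordering scheme can be tried out directly with Python's threading.Semaphore; in the sketch below the statements S1 and S2 are represented by appends to a list:

```python
import threading

synch = threading.Semaphore(0)   # initialised to 0, as in the text
order = []                       # records the execution order

def p1():
    order.append("S1")           # statement S1
    synch.release()              # signal(synch)

def p2():
    synch.acquire()              # wait(synch): blocks until P1 signals
    order.append("S2")           # statement S2

t2 = threading.Thread(target=p2)
t1 = threading.Thread(target=p1)
t2.start()                       # start P2 first: it blocks on wait(synch)
t1.start()
t1.join(); t2.join()
print(order)   # ['S1', 'S2']: S2 runs only after S1 has completed
```

Even though P2 is started first, it cannot pass wait(synch) until P1 has executed S1 and signalled.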
The disadvantage of the semaphore definition given above is that it requires busy-
waiting, i.e., while a process is in its critical section, any other process trying to enter
its critical section must loop continuously in the entry code. Clearly, busy-waiting
wastes CPU cycles that some other process might have used productively.
To overcome busy-waiting, we modify the definition of the wait and signal operations.
When a process executes the wait operation and finds that the semaphore value is not
positive, the process blocks itself. The block operation places the process into a waiting
state. Using a scheduler the CPU then can be allocated to other processes which are
ready to run.
A process that is blocked, i.e., waiting on a semaphore S, should be restarted by the
execution of a signal operation by some other processes, which changes its state from
blocked to ready. To implement a semaphore under this condition, we define a
semaphore as:
struct semaphore
{
int value;
List *L; //a list of processes
}
Each semaphore has an integer value and a list of processes. When a process must
wait on a semaphore, it is added to this list. A signal operation removes one process
from the list of waiting processes, and awakens it. The semaphore operation can be
now defined as follows:
wait(S)
{
S.value = S.value -1;
if (S.value <0)
{
add this process to S.L;
block;
    }
}
signal(S)
{
S.value = S.value + 1;
if (S.value <= 0)
{
remove a process P from S.L;
wakeup(P);
}
}
The block operation suspends the process. The wakeup(P) operation resumes the
execution of a blocked process P. These two operations are provided by the operating
system as basic system calls.
One of the most critical problems in implementing semaphores is the situation
where two or more processes are waiting indefinitely for an event that can be caused
only by one of the waiting processes: these processes are said to be deadlocked.
To illustrate this, consider a system consisting of two processes P1 and P2, each accessing
two semaphores S and Q, set to the value one:
P1 P2
wait(S); wait(Q);
wait(Q); wait(S);
... ...
signal(S); signal(Q);
signal(Q); signal(S);
Suppose P1 executes wait(S) and then P2 executes wait(Q). When P1 executes wait(Q),
it must wait until P2 executes signal(Q). Similarly, when P2 executes wait(S), it must
wait until P1 executes signal(S). Since neither of these signal operations can ever be
carried out, P1 and P2 are deadlocked. It is clear that a set of processes is in a
deadlocked state when every process in the set is waiting for an event that can only be
caused by another process in the set.
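The deadlock above can be reproduced safely with timed waits, so that the program reports the deadlock instead of hanging forever. In this hypothetical Python sketch, a barrier forces the unlucky interleaving, and acquire with a timeout stands in for wait:

```python
import threading

S = threading.Semaphore(1)
Q = threading.Semaphore(1)
barrier = threading.Barrier(2)   # forces the unlucky interleaving
results = {}

def p1():
    S.acquire()                              # wait(S)
    barrier.wait()                           # both now hold one semaphore
    results["P1"] = Q.acquire(timeout=0.5)   # wait(Q): must time out
    barrier.wait()                           # hold on until both have given up
    S.release()

def p2():
    Q.acquire()                              # wait(Q)
    barrier.wait()
    results["P2"] = S.acquire(timeout=0.5)   # wait(S): must time out
    barrier.wait()
    Q.release()

t1 = threading.Thread(target=p1)
t2 = threading.Thread(target=p2)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)   # {'P1': False, 'P2': False}: neither wait can succeed
```

With true blocking waits (no timeout), both threads would block forever: exactly the deadlock described in the text.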
3.5.1 Producer/Consumer Problem
Shared Data
char item; //could be any data type
char buffer[n];
semaphore full = 0; //counting semaphore
semaphore empty = n; //counting semaphore
semaphore mutex = 1; //binary semaphore
char nextp,nextc;
Producer Process
do
{
produce an item in nextp
wait (empty);
wait (mutex);
add nextp to buffer
signal (mutex);
signal (full);
}
while (true)
Consumer Process
do
{
wait( full );
wait( mutex );
remove an item from buffer to nextc
signal( mutex );
signal( empty );
consume the item in nextc;
}
while (true)
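The producer/consumer pseudocode above maps directly onto counting and binary semaphores. Here is a minimal Python sketch (the buffer size and item count are arbitrary choices):

```python
import threading
from collections import deque

n = 4                                  # buffer capacity (arbitrary)
buffer = deque()
full = threading.Semaphore(0)          # counts filled slots
empty = threading.Semaphore(n)         # counts empty slots
mutex = threading.Semaphore(1)         # binary semaphore guarding the buffer
consumed = []

def producer():
    for item in range(10):             # produce an item in nextp
        empty.acquire()                # wait(empty)
        mutex.acquire()                # wait(mutex)
        buffer.append(item)            # add nextp to buffer
        mutex.release()                # signal(mutex)
        full.release()                 # signal(full)

def consumer():
    for _ in range(10):
        full.acquire()                 # wait(full)
        mutex.acquire()                # wait(mutex)
        consumed.append(buffer.popleft())  # remove an item from buffer to nextc
        mutex.release()                # signal(mutex)
        empty.release()                # signal(empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)   # [0, 1, ..., 9]: every item consumed exactly once, in order
```

The counting semaphores full and empty block the consumer on an empty buffer and the producer on a full one, while mutex protects the buffer itself.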
3.5.2 Readers and Writers Problem
The readers/writers problem is one of the classic synchronization problems. It is often
used to compare and contrast synchronization mechanisms, and it is also an eminently
practical problem. A common paradigm in concurrent applications is the isolation of
shared data such as a variable, buffer, or document and the control of access to that
data. This problem has two types of clients accessing the shared data. The first type,
referred to as readers, only wants to read the shared data. The second type, referred
to as writers, may want to modify the shared data. There is also a designated central
data server or controller. It enforces exclusive write semantics; if a writer is active then
no other writer or reader can be active. The server can support clients that wish to
both read and write. The readers and writers problem is useful for modeling processes
which are competing for a limited shared resource. Let us understand it with the help of
a practical example:
An airline reservation system consists of a huge database with many processes that
read and write the data. Reading information from the database will not cause a problem
since no data is changed. The problem lies in writing information to the database. If no
constraints are put on access to the database, data may change at any moment. By the
time a reading process displays the result of a request for information to the user, the
actual data in the database may have changed. What if, for instance, a process reads
the number of available seats on a flight, finds a value of one, and reports it to the
customer? Before the customer has a chance to make their reservation, another process
makes a reservation for another customer, changing the number of available seats to
zero.
The following is the solution using semaphores:
Semaphores can be used to restrict access to the database under certain conditions. In
this example, semaphores are used to prevent any writing processes from changing
information in the database while other processes are reading from the database.
semaphore mutex = 1; // Controls access to the reader count
semaphore db = 1; // Controls access to the database
int reader_count; // The number of reading processes accessing the data
Reader()
{
while (TRUE) { // loop forever
down(&mutex); // gain access to reader_count
reader_count = reader_count + 1; // increment the reader_count
if (reader_count == 1)
down(&db); //If this is the first process to read the database,
// a down on db is executed to prevent access to the
// database by a writing process
up(&mutex); // allow other processes to access reader_count
read_db(); // read the database
down(&mutex); // gain access to reader_count
reader_count = reader_count - 1; // decrement reader_count
if (reader_count == 0)
up(&db); // if there are no more processes reading from the
// database, allow writing process to access the data
up(&mutex); // allow other processes to access reader_count
use_data(); // use the data read from the database (non-critical)
}
}
Writer()
{
while (TRUE) { // loop forever
create_data(); // create data to enter into database (non-critical)
down(&db); // gain access to the database
write_db(); // write information to the database
up(&db); // release exclusive access to the database
}
}
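The pseudocode above can be sketched in Python with two semaphores, using the airline-seat example as the shared database (names such as read_db and write_db become simple dictionary operations here; thread counts are arbitrary):

```python
import threading

mutex = threading.Semaphore(1)    # controls access to reader_count
db = threading.Semaphore(1)       # controls access to the database
reader_count = 0
database = {"seats": 1}           # one seat left on the flight
log = []                          # what each reader saw

def reader(name):
    global reader_count
    mutex.acquire()
    reader_count += 1
    if reader_count == 1:
        db.acquire()              # first reader locks out writers
    mutex.release()
    log.append((name, database["seats"]))   # read_db()
    mutex.acquire()
    reader_count -= 1
    if reader_count == 0:
        db.release()              # last reader lets writers back in
    mutex.release()

def writer():
    db.acquire()                  # exclusive access to the database
    database["seats"] -= 1        # write_db(): book the last seat
    db.release()

threads = [threading.Thread(target=reader, args=("R%d" % i,)) for i in range(3)]
threads.append(threading.Thread(target=writer))
for t in threads:
    t.start()
for t in threads:
    t.join()
print(database["seats"])   # 0: the booking happened exactly once
```

Readers may overlap with each other, but while any reader holds db the writer is excluded, and vice versa.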
3.6 LOCKS
Locks are another synchronization mechanism. A lock has two atomic
operations (similar to a semaphore) to provide mutual exclusion. These two operations
are Acquire and Release. A process acquires a lock before accessing a shared
variable, and releases it afterwards. A process locking a variable will run the
following code:
Lock-Acquire();
critical section
Lock-Release();
The difference between a lock and a semaphore is that a lock is released only by the
process that acquired it earlier; as we discussed above, any process can increment
the value of a semaphore. To implement locks, here are some things you should keep
in mind:
· Make Acquire() and Release() atomic.
· Build a wait mechanism.
· Make sure that only the process that acquires the lock will release the lock.
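The release-by-owner rule can be enforced by a thin wrapper that records which thread holds the lock. OwnedLock below is a hypothetical illustration built on Python's threading.Lock, not a standard API:

```python
import threading

class OwnedLock:
    """A lock that only the acquiring thread may release."""
    def __init__(self):
        self._lock = threading.Lock()
        self._owner = None

    def acquire(self):
        self._lock.acquire()                 # atomic wait mechanism
        self._owner = threading.get_ident()  # remember who owns the lock

    def release(self):
        if self._owner != threading.get_ident():
            raise RuntimeError("lock can only be released by its owner")
        self._owner = None
        self._lock.release()

shared = []
lock = OwnedLock()

def worker(x):
    lock.acquire()       # Lock-Acquire()
    shared.append(x)     # critical section
    lock.release()       # Lock-Release()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(shared))   # [0, 1, 2, 3, 4]
```

Any thread other than the owner calling release() gets an error, which is exactly the property that distinguishes a lock from a semaphore.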
Check Your Progress 1
1) What are race conditions? How race conditions occur in Operating Systems?
........................................................................................................................
........................................................................................................................
3.8 SUMMARY
Interprocess communication provides a mechanism to allow processes to communicate
with other processes. Interprocess communication is best provided by a message-
passing system. Message systems can be defined in many different ways. If there is a
collection of cooperating sequential processes that share some data, mutual exclusion
must be provided. There are different methods for achieving mutual exclusion.
Different algorithms are available for solving the critical section problem, which we
have discussed in this unit. The bakery algorithm is used for solving the n-process
critical section problem.
Interprocess synchronization enables processes to synchronize their activities.
Semaphores can be used to solve synchronization problems. A semaphore can only be
accessed through two atomic operations and can be implemented efficiently. The two
operations are wait and signal.
There are a number of classical synchronization problems which we have discussed in
this unit (such as the producer-consumer problem, the readers-writers problem and the
dining-philosophers problem). These problems are important mainly because they are
examples of a large class of concurrency-control problems. In the next unit we will
study an important aspect called “Deadlocks”.
3.9 SOLUTIONS/ANSWERS
Check Your Progress 1
1) Processes that are working together share some common storage (main memory,
files etc.) that each process can read and write. Situations where two or more
processes are reading or writing some shared data and the final result depends on
who runs precisely when are called race conditions. Concurrently executing threads
that share data need to synchronize their operations and processing in order to
avoid race conditions on shared data. Only one “customer” thread at a time should
be allowed to examine and update the shared variable.
Race conditions are also possible in Operating Systems. If the ready queue is
implemented as a linked list and is being manipulated during the
handling of an interrupt, then interrupts must be disabled to prevent another interrupt
before the first one completes. If interrupts are not disabled then the linked list
could become corrupt.
2) The most common synchronization problem confronting cooperating processes is
controlling access to shared resources. Suppose two processes share access
to a file or a shared area of memory, and at least one of these processes can modify
the data. That part of the code of each program where one process is reading
from or writing to the shared area is a critical section of code, because
we must ensure that only one process executes a critical section of code at a time.
The critical section problem is to design a protocol that the processes can use to
coordinate their activities when one wants to enter its critical section of code.
3) This seems like a fairly easy solution. The three smoker processes will make a
cigarette and smoke it. If they can’t make a cigarette, then they will go to sleep.
The agent process will place two items on the table, and wake up the appropriate
smoker, and then go to sleep. All semaphores except lock are initialised to 0.
Lock is initialised to 1, and is a mutex variable.
Here’s the code for the agent process.
do forever {
P( lock );
randNum = rand(1,3); // Pick a random number from 1-3
if (randNum == 1) {
// Put tobacco on table
// Put paper on table
V(smoker_match); // Wake up smoker with match
}
else if (randNum == 2) {
// Put tobacco on table
// Put match on table
V(smoker_paper); // Wake up smoker with paper
}
else {
// Put match on table
// Put paper on table
V(smoker_tobacco);
} // Wake up smoker with tobacco
V(lock);
P(agent); // Agent sleeps
} // end forever loop
The following is the code for one of the smokers. The others are analogous.
do forever {
P(smoker_tobacco); // Sleep right away
P(lock);
// Pick up match
// Pick up paper
V(agent);
V(lock);
// Smoke
}
Structure
4.1 Introduction
4.2 Objectives
4.3 Deadlocks
4.4 Characterization of a Deadlock
4.4.1 Mutual Exclusion Condition
4.4.2 Hold and Wait Condition
4.4.3 No-Preemptive Condition
4.4.4 Circular Wait Condition
4.5 Resource Allocation Graph
4.6 Dealing with Deadlock Situations
4.6.1 Deadlock Prevention
4.6.2 Deadlock Avoidance
4.6.3 Deadlock Detection and Recovery
4.7 Summary
4.8 Solutions/Answers
4.9 Further Readings
4.1 INTRODUCTION
In a computer system, we have a finite number of resources to be distributed among a
number of competing processes. These system resources are classified in several types
which may be either physical or logical. Examples of physical resources are printers,
tape drives, memory space, and CPU cycles. Examples of logical resources are
files, semaphores and monitors. Each resource type can have some identical instances.
A process must request a resource before using it and release the resource after using
it. It is clear that the number of resources requested cannot exceed the total number of
resources available in the system.
In a normal operation, a process may utilize a resource only in the following sequence:
Request: if the request cannot be immediately granted, then the requesting process
must wait until it can get the resource.
Use: the requesting process can operate on the resource.
Release: the process releases the resource after using it.
Examples of the request and release of system resources are:
Requesting and releasing a device,
Opening and closing a file,
Allocating and freeing memory.
The operating system is responsible for making sure that the requesting process has
been allocated the resource. A system table indicates if each resource is free or allocated,
and if allocated, to which process. If a process requests a resource that is currently
allocated to another process, it can be added to a queue of processes waiting for this
resource.
In some cases, several processes may compete for a fixed number of resources. A
process requests resources and if the resources are not available at that time, it enters
a wait state. It may happen that it will never gain access to the resources, since those
resources are being held by other waiting processes.
For example, assume a system with one tape drive and one plotter. Process P1 requests
the tape drive and process P2 requests the plotter. Both requests are granted. Now P1
requests the plotter (without giving up the tape drive) and P2 requests the tape drive
(without giving up the plotter). Neither request can be granted so both processes enter
a situation called the deadlock situation.
A deadlock is a situation where a group of processes is permanently blocked as a
result of each process having acquired a set of resources needed for its completion
and having to wait for the release of the remaining resources held by others thus making
it impossible for any of the deadlocked processes to proceed.
In the earlier units, we have gone through the concept of process and the need for the
interprocess communication and synchronization. In this unit we will study about the
deadlocks, its characterisation, deadlock avoidance and its recovery.
4.2 OBJECTIVES
After going through this unit, you should be able to:
define a deadlock;
understand the conditions for a deadlock;
know the ways of avoiding deadlocks, and
describe the ways to recover from the deadlock situation.
4.3 DEADLOCKS
Before studying about deadlocks, let us look at the various types of resources. There
are two types of resources namely: Pre-emptable and Non-pre-emptable Resources.
Pre-emptable resources: This resource can be taken away from the process
with no ill effects. Memory is an example of a pre-emptable resource.
Non-Preemptable resource: This resource cannot be taken away from the
process (without causing ill effect). For example, CD resources are not preemptable
at an arbitrary moment.
Reallocating resources can resolve deadlocks that involve preemptable resources.
Deadlocks that involve nonpreemptable resources are difficult to deal with. Let us see
how a deadlock occurs.
Definition: A set of processes is in a deadlock state if each process in the set is
waiting for an event that can be caused only by another process in the set. In other
words, each member of the set of deadlocked processes is waiting for a resource that
can be released only by a deadlocked process. None of the processes can run, none of
them can release any resources and none of them can be awakened. It is important to
note that the number of processes and the number and kind of resources possessed
and requested are unimportant.
Let us understand the deadlock situation with the help of examples.
Example 1: The simplest example of deadlock is where process 1 has been allocated
a non-shareable resource A, say, a tape drive, and process 2 has been allocated a non-
shareable resource B, say, a printer. Now, if it turns out that process 1 needs resource
B (printer) to proceed and process 2 needs resource A (the tape drive) to proceed and
these are the only two processes in the system, each has blocked the other and all
useful work in the system stops. This situation is termed as deadlock.
The system is in deadlock state because each process holds a resource being requested
by the other process and neither process is willing to release the resource it holds.
Example 2: Consider a system with three disk drives. Suppose there are three
processes, each is holding one of these three disk drives. If each process now requests
another disk drive, three processes will be in a deadlock state, because each process
is waiting for the event “disk drive is released”, which can only be caused by one of the
other waiting processes. Deadlock state involves processes competing not only for the
same resource type, but also for different resource types.
Deadlocks occur most commonly in multitasking and client/server environments and
are also known as a “Deadly Embrace”. Ideally, the programs that are deadlocked or
the operating system should resolve the deadlock, but this doesn’t always happen.
From the above examples, we have understood the concept of deadlocks. In the
examples we were given some instances, but we will study the necessary conditions
for a deadlock to occur, in the next section.
the printer resource might have two dots to indicate that we don’t really care which is
used, as long as we acquire the resource.
The edges among these nodes represent resource allocation and release. Edges are
directed, and if the edge goes from resource to process node that means the process
has acquired the resource. If the edge goes from process node to resource node that
means the process has requested the resource.
We can use these graphs to determine if a deadlock has occurred or may occur. If, for
example, all resources have only one instance (all resource node rectangles have one
dot) and the graph is circular, then a deadlock has occurred. If on the other hand some
resources have several instances, then a deadlock may occur. If the graph is not circular,
a deadlock cannot occur (the circular wait condition wouldn’t be satisfied).
The following are the tips which will help you to check the graph easily to predict the
presence of cycles.
If there is a cycle in the graph and each resource has only one instance, then there
is a deadlock. In this case, a cycle is a necessary and sufficient condition for
deadlock.
If there is a cycle in the graph, and each resource has more than one instance,
there may or may not be a deadlock. (A cycle may be broken if some process
outside the cycle has a resource instance that can break the cycle). Therefore, a
cycle in the resource allocation graph is a necessary but not sufficient condition for
deadlock, when multiple resource instances are considered.
Example:
The graph shown in Figure 3 has a cycle and is not in deadlock.
(Resource R1 has one instance, shown by a star.)
(Resource R2 has two instances, a and b, shown as two stars.)
The edges are: R1 → P1, P1 → R2 (a), R2 (b) → P2, P2 → R1.
If P1 finishes, P2 can get R1 and finish, so there is no deadlock.
Deadlock Prevention
Deadlock Avoidance
Let’s examine each strategy one by one to evaluate their respective strengths and
weaknesses.
Havender’s Algorithm
Elimination of “Mutual Exclusion” Condition
The mutual exclusion condition must hold for non-shareable resources. That is, several
processes cannot simultaneously share a single resource. This condition is difficult to
eliminate because some resources, such as the tape drive and printer, are inherently
non-shareable. Note that shareable resources, like a read-only file, do not require
mutually exclusive access and thus cannot be involved in a deadlock.
Elimination of “Hold and Wait” Condition
There are two possibilities for the elimination of the second condition. The first alternative
is that a process request be granted all the resources it needs at once, prior to execution.
The second alternative is to disallow a process from requesting resources whenever it
has previously allocated resources. This strategy requires that all the resources a process
will need must be requested at once. The system must grant resources on “all or
none” basis. If the complete set of resources needed by a process is not currently
available, then the process must wait until the complete set is available. While the
process waits, however, it may not hold any resources. Thus the “wait for” condition is
denied and deadlocks simply cannot occur. This strategy can lead to serious waste of
resources.
For example, a program requiring ten tap drives must request and receive all ten drives
before it begins executing. If the program needs only one tap drive to begin execution
and then does not need the remaining tap drives for several hours then substantial
computer resources (9 tape drives) will sit idle for several hours. This strategy can
cause indefinite postponement (starvation), since not all the required resources may
become available at once.
Elimination of “No-preemption” Condition: High Cost
When a process releases resources, the process may lose all its work to that point.
One serious consequence of this strategy is the possibility of indefinite postponement
(starvation).A process might be held off indefinitely as it repeatedly requests and
releases the same resources.
The last condition, the circular wait, can be denied by imposing a total ordering on all
of the resource types and then forcing all processes to request the resources in order
(increasing or decreasing). This strategy imposes a total ordering of all resource types,
and requires that each process requests resources in a numerical order of enumeration.
With this rule, the resource allocation graph can never have a cycle.
For example, provide a global numbering of all the resources, as shown in the given
Table 1:
Table 1: Numbering the resources
Number Resource
1 Floppy drive
2 Printer
3 Plotter
4 Tape Drive
5 CD Drive
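The total-ordering rule of Table 1 can be enforced mechanically by always acquiring resources in increasing number, whatever order the program names them in. A hypothetical Python sketch, with each numbered device modelled as a lock:

```python
import threading

# Global numbering of resources, as in Table 1
resources = {
    1: threading.Lock(),   # floppy drive
    2: threading.Lock(),   # printer
    3: threading.Lock(),   # plotter
    4: threading.Lock(),   # tape drive
    5: threading.Lock(),   # CD drive
}
acquired_log = []

def acquire_in_order(needed):
    """Acquire resources in increasing numerical order, whatever order
    the caller listed them in: circular wait becomes impossible."""
    for num in sorted(needed):
        resources[num].acquire()
        acquired_log.append(num)

def release_all(needed):
    for num in sorted(needed, reverse=True):
        resources[num].release()

def job(needed):
    acquire_in_order(needed)
    # ... use the devices ...
    release_all(needed)

# Without the ordering rule, these two jobs could deadlock on printer/tape drive
t1 = threading.Thread(target=job, args=([4, 2],))   # wants tape drive, printer
t2 = threading.Thread(target=job, args=([2, 4],))   # wants printer, tape drive
t1.start(); t2.start()
t1.join(); t2.join()
print("completed without deadlock")
```

Both jobs end up acquiring the printer (2) before the tape drive (4), so no cycle can form in the resource allocation graph.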
Each process declares maximum number of resources of each type that it may
need.
Keep the system in a safe state in which we can allocate resources to each process
in some order and avoid deadlock.
Check for the safe state by finding a safe sequence: <P1, P2, ..., Pn> where
resources that Pi needs can be satisfied by available resources plus resources held
by Pj where j < i.
Resource allocation graph algorithm uses claim edges to check for a safe state.
The resource allocation state is now defined by the number of available and allocated
resources, and the maximum demands of the processes. Subsequently the system can
be in either of the following states:
Safe state: Such a state occurs when the system can allocate resources to each
process (up to its maximum) in some order and avoid a deadlock. This state is
characterised by a safe sequence. It must be mentioned here that we should
not falsely conclude that all unsafe states are deadlocked, although an unsafe
state may eventually lead to a deadlock.
Unsafe State: If the system did not follow the safe sequence of resource allocation
from the beginning and it is now in a situation, which may lead to a deadlock, then
it is in an unsafe state.
Deadlock State: If the system has some circular wait condition existing for some
processes, then it is in deadlock state.
Let us study this concept with the help of an example as shown below:
Consider an analogy in which 4 processes (P1, P2, P3 and P4) can be compared with
the customers in a bank, resources such as printers etc. as cash available in the bank
and the Operating system as the Banker.
Table 2

Process    Resources used    Maximum resources
P1         0                 6
P2         0                 5
P3         0                 4
P4         0                 7
Available resource = 1
This is an unsafe state.
If all the processes request for their maximum resources respectively, then the operating
system could not satisfy any of them and we would have a deadlock.
Important Note: It is important to note that an unsafe state does not imply the existence
or even the eventual existence of a deadlock. What an unsafe state does imply is
simply that some unfortunate sequence of events might lead to a deadlock.
The Banker’s algorithm is thus used to consider each request as it occurs, and see if
granting it leads to a safe state. If it does, the request is granted; otherwise, it is postponed
until later. Habermann [1969] has shown that executing the algorithm has a complexity
proportional to N^2, where N is the number of processes, and since the algorithm is
executed each time a resource request occurs, the overhead is significant.
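The safety check at the heart of the Banker's algorithm can be sketched in a few lines for a single resource type. The function name is illustrative; the two scenarios are the unsafe state of Table 2 and a safe 12-tape-drive state (P0/P1/P2 hold 5/2/2 drives and may need up to 10/4/9):

```python
def is_safe(available, allocation, maximum):
    """Return a safe sequence of process indices, or None if the state
    is unsafe. Single resource type; quantities are plain integers."""
    need = [m - a for m, a in zip(maximum, allocation)]
    finished = [False] * len(allocation)
    sequence = []
    while len(sequence) < len(allocation):
        for i, done in enumerate(finished):
            if not done and need[i] <= available:
                available += allocation[i]   # i runs to completion and releases
                finished[i] = True
                sequence.append(i)
                break
        else:
            return None                      # no process can proceed: unsafe
    return sequence

# Table 2: four processes each hold 0 units, 1 unit available -- unsafe
print(is_safe(1, [0, 0, 0, 0], [6, 5, 4, 7]))   # None

# 12 tape drives: P0/P1/P2 hold 5/2/2, may need up to 10/4/9 -- safe
print(is_safe(3, [5, 2, 2], [10, 4, 9]))        # [1, 0, 2]
```

In the safe case the algorithm finds the sequence P1, P0, P2: P1's remaining need (2) fits in the 3 free drives, and each completion releases enough for the next process.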
Limitations of the Banker’s Algorithm
There are some problems with the Banker’s algorithm as given below:
It is time consuming to execute on the operation of every resource.
If the claim information is not accurate, system resources may be underutilized.
Another difficulty can occur when a system is heavily loaded. Lauesen states that
in this situation “so many resources are granted away that very few safe sequences
remain, and as a consequence, the jobs will be executed sequentially”. Therefore,
the Banker’s algorithm is referred to as the “Most Liberal” granting policy; that is,
it gives away everything that it can without regard to the consequences.
New processes arriving may cause a problem.
– The process’s claim must be less than the total number of units of the resource
in the system. If not, the process is not accepted by the manager.
– Since the state without the new process is safe, so is the state with the new
process. Just use the order you had originally and put the new process at the
end.
– Ensuring fairness (starvation freedom) needs a little more work, but isn’t too
hard either (once every hour stop taking new processes until all current
processes finish).
A resource becoming unavailable (e.g., a tape drive breaking), can result in an
unsafe state.
Check Your Progress 1
1) What is a deadlock and what are the four conditions that will create the deadlock
situation?
........................................................................................................................
........................................................................................................................
........................................................................................................................
........................................................................................................................
4.7 SUMMARY
A deadlock occurs when a process holds some resource and is waiting to acquire another
resource, while that resource is held by some process that is waiting to acquire
the resource held by the first process.
A deadlock needs four conditions to occur: Mutual Exclusion, Hold and Wait, Non-
Preemption and Circular Waiting.
We can handle deadlocks in three major ways: We can prevent them, handle them
when we detect them, or simply ignore the whole deadlock issue altogether.
4.8 SOLUTIONS/ANSWERS
Check Your Progress 1
1) A set of processes is in a deadlock state when every process in the set is waiting
for an event that can only be caused by another process in the set. A deadlock
situation can arise if the following four conditions hold simultaneously in a system:
Mutual Exclusion: At least one resource must be held in a non-shareable
mode; that is, only one process at a time can use the resource. If another
process requests the resource, the requesting process must be delayed until
the resource has been released.
Hold and Wait: A process must be holding at least one resource and waiting
to acquire additional resources that are currently being held by other
processes.
No Preemption: Resources cannot be preempted; a resource can be released
only voluntarily by the process holding it, after that process has completed
its task.
Circular Wait: A set {P0, P1, P2, …, Pn} of waiting processes must exist
such that P0 is waiting for a resource that is held by P1, P1 is waiting for a
resource that is held by P2, …, Pn-1 is waiting for a resource that is held by
Pn, and Pn is waiting for a resource that is held by P0.
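The Circular Wait condition can be illustrated with a small sketch that follows wait-for edges between processes and reports whether they close a cycle. The wait-for relation below is an assumed example, not part of the text:

```python
# Follow the chain of "waits for" edges starting from one process;
# a circular wait exists if the chain returns to the starting process.

def has_cycle(waits_for, start):
    """True if following wait-for edges from `start` returns to it."""
    seen = set()
    p = start
    while p in waits_for:
        p = waits_for[p]
        if p == start:
            return True          # chain closed: circular wait
        if p in seen:
            return False         # cycle elsewhere, not through `start`
        seen.add(p)
    return False                 # chain ended at a non-waiting process

# P0 -> P1 -> P2 -> P0: circular wait among the three processes
print(has_cycle({"P0": "P1", "P1": "P2", "P2": "P0"}, "P0"))  # True
# P0 -> P1 -> P2, and P2 waits for nothing: no circular wait
print(has_cycle({"P0": "P1", "P1": "P2"}, "P0"))              # False
```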
2) Deadlock avoidance deals with processes that declare, before execution, the
maximum number of resources they may need during their execution. Given several
processes and resources, if we can allocate the resources in some order so as to
avoid a deadlock, the system is said to be in a safe state. If no such order exists,
the system is said to be in an unsafe state, and a deadlock may occur. The idea of
deadlock avoidance is simply never to allow the system to enter an unsafe state.
For example, consider a system with 12 tape drives and three processes: P0, P1
and P2. P0 may require up to 10 tape drives during execution, P1 up to 4, and
P2 up to 9. Suppose that at some moment in time P0 is holding 5 tape drives,
P1 holds 2 and P2 holds 2, leaving 3 drives free. The system is in a safe state,
since the safe sequence <P1, P0, P2> avoids the deadlock. This sequence implies
that P1 can immediately get all of its needed resources (its maximum is 4 and it
already holds 2, so it needs at most 2 more, which it can get since 3 drives are
free). Once it finishes executing, it releases all 4 of its drives, making 5 drives
free, at which point P0 can run to completion (it needs at most 5 more of its
maximum of 10); after it finishes, 10 drives are free and P2 can proceed, since
the maximum it needs is 9.
Now, here’s an unsafe state: suppose at some point P2 requests one more drive,
bringing its holding to 3. The system is now in an unsafe state, since only P1 can
be allocated its remaining resources, and when it returns them only 4 drives are
free, which is not enough for either P0 (which may need 5 more) or P2 (which
may need 6 more), so the system may enter a deadlock.
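The safe-state test applied in this example can be sketched as the safety check of the Banker’s algorithm for a single resource type. The function name and representation below are illustrative; the numbers are the tape-drive scenario from the text:

```python
# Safety check for one resource type: repeatedly find a process whose
# remaining need fits in the free pool, let it finish and release its
# holdings, and record the order. If every process finishes, the order
# found is a safe sequence; otherwise the state is unsafe.

def is_safe(free, holding, maximum):
    """Return a safe completion order (list of indices), or None."""
    need = [m - h for m, h in zip(maximum, holding)]
    finished = [False] * len(holding)
    order = []
    while len(order) < len(holding):
        for p in range(len(holding)):
            if not finished[p] and need[p] <= free:
                free += holding[p]   # p runs to completion, releases all
                finished[p] = True
                order.append(p)
                break
        else:
            return None              # no process can finish: unsafe
    return order

# Safe state: P0 holds 5 (max 10), P1 holds 2 (max 4), P2 holds 2 (max 9);
# 12 - 9 = 3 drives free.
print(is_safe(3, [5, 2, 2], [10, 4, 9]))   # [1, 0, 2] -> safe
# After granting P2 one more drive: 2 free, P2 holds 3.
print(is_safe(2, [5, 2, 3], [10, 4, 9]))   # None -> unsafe
```

The first call reproduces the safe sequence <P1, P0, P2>; the second shows that after P2’s extra request no process other than P1 can ever finish, so the state is unsafe.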