
B.Sc II Year (IV Semester) : Operating Systems


Syllabus – w.e.f. 2020-21

Semester | Course Code | Course Title      | Hours | Credits
IV       | C5          | OPERATING SYSTEMS | 60    | 4

UNIT- I
What is Operating System? History and Evolution of OS, Basic OS functions,
Resource Abstraction, Types of Operating Systems– Multiprogramming Systems,
Batch Systems, Time Sharing Systems; Operating Systems for Personal
Computers, Workstations and Hand-held Devices, Process Control & Real time
Systems.
UNIT- II
Processor and User Modes, Kernels, System Calls and System Programs, System
View of the Process and Resources, Process Abstraction, Process Hierarchy,
Threads, Threading Issues, Thread Libraries; Process Scheduling, Non-Preemptive
and Preemptive Scheduling Algorithms.

UNIT III
Process Management: Deadlock, Deadlock Characterization, Necessary and
Sufficient Conditions for Deadlock, Deadlock Handling Approaches: Deadlock
Prevention, Deadlock Avoidance and Deadlock Detection and Recovery.
Concurrent and Dependent Processes, Critical Section, Semaphores, Methods for
Inter-process Communication; Process Synchronization, Classical Process
Synchronization Problems: Producer-Consumer, Reader-Writer.

UNIT IV
Memory Management: Physical and Virtual Address Space; Memory Allocation
Strategies – Fixed and Variable Partitions, Paging, Segmentation, Virtual Memory.

UNIT V
File and I/O Management, OS Security: Directory Structure, File Operations, File
Allocation Methods, Device Management, Pipes, Buffer, Shared Memory, Security
Policy Mechanism, Protection, Authentication and Internal Access Authorization

REFERENCE BOOKS:
1. Operating System Principles by Abraham Silberschatz, Peter Baer Galvin and
Greg Gagne (7th Edition), Wiley India Edition.
2. Operating Systems: Internals and Design Principles by Stallings (Pearson)
3. Operating Systems by J. Archer Harris (Author), Jyoti Singh (Author) (TMH)

Jagan’s Degree & PG College Page 1 of 47



UNIT-1 Operating System Introduction


Q. What is an Operating System and what are its objectives?
Operating System: An Operating System is an interface or mediator between the user and the computer system
(hardware).
 An operating system is a program that manages the computer hardware. An operating system is
an important part of almost every computer system.
 A computer system can be divided roughly into four components: the hardware, the operating system,
the application programs, and the users.

Fig: Computer System Architecture

Objectives and Functions (or) Services of Operating System:


An operating system is system software that acts as an interface between the user and the
computer. The various services or functions provided by an operating system are as
follows:
1. Program Execution
2. I/O Operations
3. File System Manipulation
4. Error Handling
5. Resource Manager
6. User Interface
7. Multitasking
8. Security
9. Networking
1. Program Execution:
 A number of steps are needed to perform program execution.
 Program must be loaded into main memory, I/O devices and files must be initialized,
and other resources must be prepared.
 The OS handles these tasks for the user to execute programs.
2. I/O Operations:
 Each Input/Output device requires its own set of instructions or signals for operation.
 I/O operation means read or write operation with specific I/O device.
 The Operating System provides access to the I/O devices, whenever required.
3. File System manipulation:
 A file is a collection of related information. The files are stored in secondary storage
device.
 For easy access, files are grouped together into directories.


 The various file operations are creating/deleting files, backup the files, Mapping files
onto secondary memory etc.
4. Error Handling:
 Various types of errors can occur while a computer system is running.
 These include internal and external hardware errors, such as memory errors, device
failure errors etc.
 In each case, the OS is responsible for handling the errors, without affecting running
applications.


5. Resource Manager:
 A computer has a set of resources for storing, processing of data, and also control of
these functions.
 The OS is responsible for managing these resources.
6. User Interface:
 OS provides an environment like CUI (Character User Interface) or GUI (Graphical User
Interface) to the user, to use the computer easily.
 It also translates various instructions given by user.
 Hence it acts as interface between user and computer.
7. Multitasking:
 The OS automatically switches from one task to another, when multiple tasks are executed.
 For example, typing text, listening to music, printing information and so on.
 The Operating System is responsible for executing all the tasks at the same time.
8. Security:
 OS provides the security to protect various resources against unauthorized users.
 It also uses a timer that does not allow unauthorized processes to access the CPU.
9. Networking:
 Networking is used for exchanging the information between different computers.
 These computers are connected by using various communication links, such as
telephone lines or buses.
Resource Abstraction & Types of Operating Systems (Evolution of OS):
Resource abstraction is the process of "hiding the details of how the hardware
operates, thereby making computer hardware relatively easy for an application programmer
to use".
Operating Systems are classified into different categories. Following are some of the most
widely used types of operating system:
1. Simple Batch System
2. Multiprogramming System
3. Distributed Systems
4. Real Time Systems
5. Time sharing Operating Systems
1. Simple Batch Systems:


In a Batch Processing System, computer programs are executed as 'batches'. In this
system, programs are collected, grouped and executed together.
 In Batch Processing System, the user has to submit a job (written on cards or tape) to
a Computer operator.
 The Computer operator grouped all the jobs together as batches serially.
 Then computer operator places a batch of several jobs into input device.
 Then a special program called the Monitor executes each program in the batch.
 The Batches are executed one after another at a defined time interval.
 Finally, the operator receives the output of all jobs, and returns them to the concerned
users.
2. Multiprogramming (Multitasking) Systems:
In a multiprogramming system, two or more programs are loaded into main memory.
Only one program is executed at a time by the CPU; all the other programs are waiting for
execution.

Fig: Three processes in main memory sharing one CPU

 This operating system picks and begins to execute one job from memory.
 Once this job needs an I/O operation, then operating system switches to another job
(CPU and OS always busy).
 Jobs in the memory are always less than the number of jobs on disk (Job Pool).
 If several jobs are ready to run at the same time, then OS chooses which one to run
based on CPU Scheduling methods.
 In Multiprogramming system, CPU will never be idle and keeps on processing.

3. Distributed Operating System:


In Distributed Operating System, the workload is shared between two or more
computers, linked together by a network.
 A network is a communication path between two or more computer systems.
 In Distributed Operating system, computers are called Nodes.
 It provides an illusion (imagination) to its users, that they are using single computer.
 Different computers are linked together by a communication network. i.e., LAN (Local
Area Network) or WAN (Wide Area Network).
 The Distributed Operating system has the following two models:
1. Client-Server Model


2. Peer-to-Peer Model
1. Client-Server Model: In this model, the Client sends a resource request to the
Server, and the server provides the requested resource to the Client. The following
diagram shows the Client-Server Model.
Fig: Client-Server Model – clients connected to a server through a network


2. Peer-to-Peer Model: In the P2P Model, the Peers are computers, which are connected to
each other via a network. Files can be shared directly between systems on the
network, without need of a central Server. The following diagram shows the Peer-to-
Peer Model.

Fig: Peer-to-Peer Model – peers connected directly through a network


4. Real-Time Operating System
 A Real Time Operating System (RTOS) is a special-purpose operating system.
 RTOS is a very fast and small operating system. It is also called Embedded system.
 It is used to control scientific experiments, industrial control systems, rockets, home
appliances, weapon systems etc.
 RTOS is divided into the following two categories:
1. Hard Real-Time System
2. Soft Real-Time System
1. Hard Real-Time System: It guarantees that critical tasks are completed within a
time limit. If a task is not completed in time, then the system is considered
to have failed.
Ex: Nuclear systems, some medical equipment, flight control systems etc.
2. Soft Real-Time System: It is a less restrictive system. If a task is not completed
in time, the system is not considered to have failed.
Ex: Multimedia (Games), Home Appliances etc.
5. Time-Sharing OS:
 It allows multiple users to simultaneously share the CPU's time.
 This OS allots a time slot to each user for execution.
 When the time slot expires, the OS allocates the CPU to the
next user on the system.
 The time slot period is between 10-100 ms; this time is
called a time slice or a quantum.


Operating Systems for Personal Computers


1. Microsoft Windows
 Microsoft created the Windows operating system in the mid-1980s.
 There have been many different versions of Windows, but the most recent ones are
Windows 10 (released in 2015), Windows 8 (2012), Windows 7 (2009), and Windows
Vista (2007).
 Windows comes pre-loaded on most new PCs, which helps to make it the most popular
operating system in the world.

2. macOS
 macOS (previously called OS X) is a line of operating systems created by Apple.
 It comes preloaded on all Macintosh computers, or Macs.
 Some of the specific versions include Mojave (released in 2018), High Sierra (2017),
and Sierra (2016).
3. Solaris
 Best for Large workload processing, managing multiple databases, etc.
 Solaris is a UNIX based operating system which was originally developed by Sun
Microsystems in the mid-’90s.
 In 2010 it was renamed Oracle Solaris after Oracle acquired Sun Microsystems. It is
known for its scalability and several other features such as DTrace,
ZFS and Time Slider.
4. Linux
 Linux was introduced by Linus Torvalds and the Free Software Foundation (FSF).
 Linux (pronounced LINN-ux) is a family of open-source operating systems,
 which means they can be modified and distributed by anyone around the world.
 This is different from proprietary software like Windows, which can only be modified by
the company that owns it.
 The advantages of Linux are that it is free, and there are many different distributions—or
versions—you can choose from.
5. Chrome OS
Best for web applications.
Chrome OS is another Linux-kernel-based operating system, designed by Google. As
it is derived from the free Chromium OS, it uses the Google Chrome web browser as its
principal user interface. This OS primarily supports web applications.
WORKSTATIONS

 A workstation is a computer used for engineering applications
(CAD/CAM), desktop publishing, software development, and
other such types of applications which require a moderate

amount of computing power and relatively high quality graphics capabilities.


 Workstations generally come with a large, high-resolution graphics screen, large amount of
RAM, inbuilt network support, and a graphical user interface. Most workstations also have
mass storage device such as a disk drive, but a special type of workstation, called diskless
workstation, comes without a disk drive.
 Common operating systems for workstations are UNIX and Windows NT. Like PCs,
workstations are single-user computers, but they are typically linked together to
form a local-area network, although they can also be used as stand-alone systems.

Process Control
A Process Control Block is a data structure that contains information related to a process.
The process control block is also known as a task control block, an entry of the process
table, etc.

Process Control Block (PCB):


 The Process Control Block (PCB) is a data structure, which is created and managed by
Operating System. It is also called Task Control Block.
 Each process is represented in the operating system by a Process Control Block.
 Each and every process has its own PCB. The information in the PCB is updated, during
the process execution.
 The PCB contains sufficient information. So that it is possible to interrupt a running
process, and later resume execution.
 The Process Control Block contains the following information:

Fig: PCB layout – Identifier, Process state, Priority, Program Counter, Memory
Pointers, Context Data, I/O Status Information, Accounting Information

1. Identifier: It contains a unique value, which is assigned by the OS at the time of
process creation.
2. State: It contains the process's current state. The process state may be new, ready,
running, waiting, or terminated.
3. Priority: It contains the priority level value, relative to other processes.
4. Program Counter: It contains the address of the next instruction to be executed in
the program.
5. Memory Pointers: It contains addresses of the instructions and data related to the
process.
6. Context Data: It contains the data which is stored in CPU registers while the
process is executing.
7. I/O Status Information: It contains a list of I/O devices allocated to the process, a
list of open files, and so on.
8. Accounting Information: It contains the amount of processor time used, time limits,
account numbers, and so on
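The PCB fields above can be sketched as a simple data structure. The following is an illustrative Python model, not a real kernel structure (actual PCBs are C structures inside the OS kernel); the field names and defaults are chosen to mirror the eight items listed above.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of a Process Control Block. Each process gets its own
# PCB, and the OS updates these fields during process execution.
@dataclass
class PCB:
    identifier: int                  # unique ID assigned by the OS at creation
    state: str = "new"               # new, ready, running, waiting, terminated
    priority: int = 0                # priority relative to other processes
    program_counter: int = 0         # address of the next instruction
    memory_pointers: List[int] = field(default_factory=list)
    context_data: dict = field(default_factory=dict)     # saved CPU registers
    open_files: List[str] = field(default_factory=list)  # I/O status information
    cpu_time_used: float = 0.0       # accounting information

pcb = PCB(identifier=101)
pcb.state = "ready"                  # the OS updates the PCB as the process runs
```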

UNIT-2

Processor
A processor is a hardware component which controls all the operations of the computer
system. It is commonly referred to as the Central Processing Unit (CPU).

 A processor is an integrated electronic circuit that performs the calculations that run a
computer.
 A processor performs arithmetical, logical, input/output (I/O) and other basic instructions
that are passed from an operating system (OS).
 Most other processes are dependent on the operations of a processor.
 The CPU is just one of the processors inside a personal computer (PC).
 The Graphics Processing Unit (GPU) is another processor, and even some hard drives are
technically capable of performing some processing.

Processor Registers: A Register is a small memory that resides in the processor. It provides
data quickly to currently executing programs (processes). A register can be 8-bit, 16-bit,
32-bit, or 64-bit.
a) PC: PC stands for Program Counter. It contains the address of the next instruction to be
executed.
b) IR: IR stands for Instruction Register. It stores the instruction currently being executed.
c) MAR: MAR stands for Memory Address Register. It stores the address of the data or
instruction fetched from main memory.
d) MBR: MBR stands for Memory Buffer Register. It stores the data or instruction fetched
from main memory. It is then copied into the Instruction Register (IR) for execution.
e) I/OAR: I/O AR stands for Input/Output Address Register. It specifies a particular I/O
device.
f) I/OBR: I/O BR stands for Input/Output Buffer Register. It is used for exchanging
data between an I/O module and the processor.


User Mode and Kernel Mode.


There are two modes of operation in the operating system to make sure it works correctly.
These are
1. User mode
2. Kernel mode.
1. User Mode
The system is in user mode when the operating system is running a user application,
such as a text editor.
While in the User Mode, the CPU executes the processes that are given by the user in
the User Space.
The mode bit is set to 1 in the user mode. It is changed from 1 to 0 when switching
from user mode to kernel mode.
2. Kernel Mode
 A Kernel is a computer program that is the heart of an Operating System.
 The system starts in kernel mode when it boots and after the operating system is loaded,
it executes applications in user mode.
 There are certain instructions that need to be executed by Kernel only. So, the CPU
executes these instructions in the Kernel Mode only.
Ex:- memory management should be done in Kernel-Mode only
 The mode bit is set to 0 in the kernel mode. It is changed from 0 to 1 when switching from
kernel mode to user mode.
 The Operating System has control over the system, and the Kernel has control over
everything in the system.
 The Kernel remains in the memory until the Operating System is
shut-down.
 It provides an interface between the user and the hardware
components of the system. When a process makes a request to
the Kernel, then it is called System Call.
Functions of a Kernel
 Access to computer resources
 Resource Management
 Memory Management
 Device Management
When a user process executing in user mode makes a system
call, a system trap is generated and the mode bit is set to zero. The system call gets
executed in kernel mode. After the execution is completed, again a system trap is generated


and the mode bit is set to 1. Control returns to user mode and the process
execution continues.

System Call

 A system call is a way for programs to interact with the operating system.
 A computer program makes a system call when it makes a request to the operating
system’s kernel.
 System calls provide the services of the operating system to user programs via the
Application Program Interface (API).
 It provides an interface between a process and operating system. All programs needing
resources must use system calls.
Services Provided by System Calls :
1. Process creation and management
2. Main memory management
3. File Access, Directory & File system management
4. Device handling(I/O)
5. Protection
6. Networking, etc.
Types of System Calls : There are 5 different categories of system calls –
1. Process control: end, abort, create, terminate, allocate and free memory.
2. File management: create, open, close, delete, read file etc.
3. Device management
4. Information maintenance
5. Communication
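The file-management category above can be illustrated with Python's os module, whose functions are thin wrappers over the kernel's system calls (open, write, read, close, unlink on POSIX). This is an illustrative sketch; the file name demo.txt is just an example.

```python
import os

# File management through system-call wrappers: create/open a file, write to
# it, read it back, and delete it. Each call asks the kernel for a service.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY, 0o644)  # create + open
os.write(fd, b"hello via system calls\n")                   # write to device/file
os.close(fd)                                                # release the descriptor

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 100)   # read up to 100 bytes
os.close(fd)
os.remove("demo.txt")     # delete the file
print(data)               # b'hello via system calls\n'
```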

System Programs

System Programming can be defined as the act of building Systems Software using System
Programming Languages.

According to the computer hierarchy, hardware comes at the lowest level, followed by the
Operating System, then System Programs, and finally Application Programs.


Process Concepts
Process:
A Process is a program in execution. A system consists of a collection of processes.
All the processes are executed in a sequential fashion. Operating system processes
execute system code, and user processes execute user code.
Process Hierarchy

Process States:
 The process state is defined as the current activity of the process.
 A process goes through various states, during its execution.
 The Operating system places all the processes in a FIFO (First In First Out) queue for
execution.
 A dispatcher is a program; it switches the processor from one process to another for
execution.
 The different process states are as follows.

Fig: Process state diagram – New → (admitted) → Ready → (dispatch) → Running →
(complete) → Terminated; Running → (timeout) → Ready; Running → (event wait) →
Waiting; Waiting → (event occurs) → Ready
1. New State: The New state indicates that a process is being admitted (created) by the
operating system.
2. Ready State: The Ready state indicates that the process is ready to execute, i.e., waiting
for a chance to execute.


3. Running State: The Running state indicates that the instructions of the process are being
executed.
4. Waiting State: The Waiting state indicates that the process is waiting for some event to
occur, such as the completion of an I/O operation. It is also known as the Blocked state.
5. Terminated State: The Terminated state indicates that the process has finished its
execution. The process may have either completed execution or been aborted for some reason.
State Transitions of a Process
A process moves between states through the following transitions:
1. Null → New
2. New → Ready
3. Ready → Running
4. Running → Terminated
5. Running → Ready
6. Running → Waiting
7. Waiting → Ready
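The seven transitions above can be modeled as a small transition table. This is an illustrative sketch; the event names (admit, dispatch, timeout, etc.) are assumptions chosen to match the state diagram, not OS-defined identifiers.

```python
# Transition table for the five-state process model: (current state, event)
# maps to the next state. Any pair not listed is an illegal transition.
TRANSITIONS = {
    ("null", "admit"): "new",
    ("new", "ready"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "complete"): "terminated",
    ("running", "timeout"): "ready",
    ("running", "event_wait"): "waiting",
    ("waiting", "event_occurs"): "ready",
}

def next_state(state, event):
    """Return the next process state, or raise on an illegal transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} on {event}")

s = "new"
s = next_state(s, "ready")       # -> ready
s = next_state(s, "dispatch")    # -> running
s = next_state(s, "event_wait")  # -> waiting
```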

Process Creation and Termination (or) Operations on Process


The Operating System must provide a facility for process creation and termination. The
processes are created and deleted dynamically.
1. Process Creation:
When a new process is added, the Operating System creates a Process Control Block
and allocates space in main memory. These steps are called Process Creation.
Example: Opening MS-Word software
When the OS creates a new process at the request of another process, it is
referred to as "Process Spawning". When one process spawns (produces) another, the
former process is called the Parent process, and the spawned (produced) process is called
the Child process.
Example: Printing from MS-Word software
2. Process Termination:
An operating system terminates a process in different situations. During termination, all
the process-related information is released from main memory.
Example: Closing MS-Word software
Reasons for process termination:
A process can be terminated due to the following reasons:
 Normal completion of the process
 Time limit exceeded
 I/O Failure
 Invalid instruction executed
 Parent process terminated


Process Scheduling (or) CPU Scheduling


Process scheduling is the activity of assigning processes to the processor for execution.
It is the method of executing multiple processes at a time in a multiprogramming
system.
Hence, the CPU scheduling helps to achieve system objectives such as response time,
CPU utilization, waiting time etc. In many systems, the scheduling task is divided into three
separate functions. They are
1. Long-Term Scheduler
2. Short-Term Scheduler
3. Medium-Term Scheduler

Fig: Scheduling levels – the long-term scheduler admits New processes to the Ready
(or Ready/Suspend) queue, the medium-term scheduler swaps processes between
Ready/Suspend and Ready, and the short-term scheduler dispatches a Ready process
to Running until it exits.

1. Long-Term Scheduler:
 A Long-Term Scheduler determines which programs are admitted to the system for
processing.
 Once a program is admitted, it becomes a process and is added to the queue.
 It controls the degree of multiprogramming, i.e., the number of processes present in the
ready state at any time.
 The Long-Term Scheduler is also called the Job Scheduler.

2. Short-Term Scheduler:
 The Short-Term Scheduler is also known as the CPU Scheduler or Dispatcher.
 It decides which process will execute next in the CPU. i.e., Ready to Running state.
 It also preempts the currently running process, to execute another process.
 The main aim of this scheduler is, to enhance CPU performance and increase process
execution rate.
3. Medium-Term Scheduler:
 The Medium-Term Scheduler is responsible for suspending and resuming the
processes.
 It mainly does Swapping. i.e., moving processes from Main memory to secondary
memory and vice versa.
 The Medium-Term Scheduler reduces the degree of Multi-programming.


Process Scheduling Algorithms


Scheduling algorithms are used to decide, which of the process in the queue should be
allocated to the CPU. An Operating System uses Dispatcher, which assigns a process to the
CPU.
Types of Scheduling Algorithms:
The scheduling algorithms are classified into two types. They are as follows:

1. Non-Preemptive Algorithms
2. Preemptive Algorithms

I. Non-Preemptive Algorithms:
A non-preemptive algorithm will not preempt the currently running process. In this
case, once a process enters CPU execution, it cannot be preempted until it
completes its execution.
Ex: (1). First Come First Serve (FCFS)
(2). Shortest Job First (SJF)
II. Preemptive Algorithms:
A preemptive algorithm may preempt the currently running process. In this case,
the currently running process may be interrupted and moved to the Ready state. The
preemption decision is made when a new process arrives, when an interrupt
occurs, or when a time-out occurs.
Ex: Round Robin (RR)
1) First Come First Serve [FCFS] Algorithm:
 The FCFS algorithm is the simplest and most straightforward scheduling algorithm.
 It follows the Non-Preemptive scheduling method.
 In this algorithm, processes are executed on a first-come, first-served basis.
 This algorithm is easy to understand and implement.
 The problem with this algorithm is that the average waiting time can be quite long.
Example: Consider the following processes that arrive at time 0.

Process | Burst Time (ms)
--------|----------------
P1      | 24
P2      | 3
P3      | 3

If the processes arrive in the order P1, P2, P3, then the Gantt chart of this scheduling is
as follows.

| P1 | P2 | P3 |
0    24   27   30
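The Gantt chart above can be reproduced with a short simulation. This is an illustrative sketch assuming all processes arrive at time 0 and are served in submission order, as in the example.

```python
# Compute FCFS finish times and average waiting time for processes that all
# arrive at time 0 and run to completion in order.
def fcfs(bursts):
    """bursts: list of burst times in arrival order.
    Returns (finish_times, average_waiting_time)."""
    t = 0
    finish, waiting = [], []
    for burst in bursts:
        waiting.append(t)   # with arrival at 0, waiting time equals start time
        t += burst
        finish.append(t)
    return finish, sum(waiting) / len(waiting)

finish, avg_wait = fcfs([24, 3, 3])   # P1, P2, P3
print(finish)     # [24, 27, 30] – matches the Gantt chart
print(avg_wait)   # (0 + 24 + 27) / 3 = 17.0
```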

2) Shortest Job First [SJF] Algorithm:


 It is also called Shortest Process Next (SPN).
 It follows the Non-Preemptive scheduling method.
 The SJF algorithm gives a better average waiting time than FCFS.
 The process with the least burst time is selected from the ready queue for execution.
 This is the best approach to minimize waiting time.
 The problem with SJF is that it requires prior knowledge of the burst time of each
process.
Example: Consider the following processes that arrive at time 0.

Process | Burst Time (ms)
--------|----------------
P1      | 6
P2      | 8
P3      | 7
P4      | 3

The Gantt chart of SJF scheduling is as follows.

| P4 | P1 | P3 | P2 |
0    3    9    16   24
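The SJF order above follows directly from sorting by burst time, since every process arrives at time 0. A minimal sketch of this, under that same assumption:

```python
# Non-preemptive SJF for processes that all arrive at time 0: serve them in
# order of increasing burst time and accumulate waiting times.
def sjf(processes):
    """processes: list of (name, burst). Returns (execution order, avg wait)."""
    order = sorted(processes, key=lambda p: p[1])   # shortest burst first
    t, waits = 0, {}
    for name, burst in order:
        waits[name] = t       # waiting time = start time when arrival is 0
        t += burst
    return [name for name, _ in order], sum(waits.values()) / len(waits)

order, avg_wait = sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print(order)      # ['P4', 'P1', 'P3', 'P2'] – matches the Gantt chart
print(avg_wait)   # (0 + 3 + 9 + 16) / 4 = 7.0
```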
3) Round Robin [RR] Algorithm:
 The Round Robin scheduling algorithm is used in Time-Sharing Systems.
 It is one of the most widely used algorithms.
 A fixed time (quantum) is allotted to each process for execution.
 If the running process doesn't complete within the quantum, then the process is
preempted.
 The next process in the ready queue is allocated the CPU for execution.
 The problem with this algorithm is that the average waiting time is often long.
Example: Consider the following processes that arrive at time 0.

Process | Burst Time (ms)
--------|----------------
P1      | 24
P2      | 3
P3      | 3

If the time quantum is 4 milliseconds, then the Gantt chart of this scheduling is as
follows.

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30
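The Round Robin Gantt chart above can be reproduced with a queue-based simulation. This is an illustrative sketch assuming all processes arrive at time 0 and the quantum is 4 ms, as in the example.

```python
from collections import deque

# Round Robin: run each process for at most one quantum, then move it to the
# back of the ready queue if it still has work left. Records (name, start, end)
# for each CPU slice.
def round_robin(processes, quantum):
    queue = deque(processes)          # entries are (name, remaining burst)
    t, timeline = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        timeline.append((name, t, t + run))
        t += run
        if remaining > run:           # not finished: requeue with less work
            queue.append((name, remaining - run))
    return timeline

timeline = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print([name for name, _, _ in timeline])
# ['P1', 'P2', 'P3', 'P1', 'P1', 'P1', 'P1', 'P1'] – matches the Gantt chart
```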


Threads
 A Thread is also called a "Light Weight Process"; it is a single unit of execution within a
process.
 A thread has its own Program Counter (PC), a register set, and a stack.
 It shares some information with other threads of the same process, such as process code,
data, and open files.
 A traditional process has a single thread of control. It is also called a "Heavy Weight
Process".
 If a process contains multiple threads of control, then it can do more than one task at a
time.
 Many software packages that run on modern computers are multithreaded.
 For example, MS-Word uses multiple threads, such as performing spelling and
grammar checking in the background, auto-save, etc.
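The idea of threads sharing process data while keeping their own execution state can be sketched with Python's threading module. The thread names ("spell-check", "auto-save") are illustrative, echoing the MS-Word example above.

```python
import threading

# Two threads within one process. Each has its own stack and flow of control,
# but both share the process's data (the `results` list here), so access to it
# is protected with a lock.
results = []
lock = threading.Lock()

def worker(name):
    with lock:                       # shared data needs synchronization
        results.append(f"{name} done")

t1 = threading.Thread(target=worker, args=("spell-check",))
t2 = threading.Thread(target=worker, args=("auto-save",))
t1.start(); t2.start()
t1.join(); t2.join()                 # wait for both threads to finish
print(sorted(results))   # ['auto-save done', 'spell-check done']
```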



Threading Issues
The following are common threading issues:
a) The fork() and exec() system call
b) Signal handling
c) Thread cancelation
d) Thread Pools
e) Thread local storage
a. The fork() and exec() system calls
 The fork() system call is used to create a duplicate process.
 The meaning of the fork() and exec() system calls changes in a multithreaded program.
 If a thread calls fork(), does the new process duplicate all threads, or only the calling
thread?
 If a thread calls the exec() system call, the program specified in the parameter to exec()
will replace the entire process, including all threads.
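A minimal sketch of fork() creating a duplicate process, using Python's os.fork wrapper. This is POSIX-only (os.fork is unavailable on Windows); the child receives a copy of the parent's memory, and fork() returns 0 in the child and the child's PID in the parent.

```python
import os

# fork() duplicates the calling process. Both processes continue from the
# point of the fork; the return value tells each one which side it is on.
pid = os.fork()
if pid == 0:
    # Child process: fork() returned 0 here.
    print(f"child: my pid is {os.getpid()}")
    os._exit(0)          # terminate the child immediately
else:
    # Parent process: fork() returned the child's pid.
    os.waitpid(pid, 0)   # wait for the child to terminate
    print(f"parent: child {pid} finished")
```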
b. Signal Handling
Generally, a signal is used in UNIX systems to notify a process that a particular event
has occurred.
A signal is received either synchronously or asynchronously, based on the source of and
the reason for the event being signaled.
All signals, whether synchronous or asynchronous, follow the same pattern:
 A signal is generated by the occurrence of a particular event.
 The signal is delivered to a process.
 Once delivered, the signal must be handled.
c. Cancellation
Termination of a thread in the middle of its execution is called 'thread cancellation'.
Threads that are no longer required can be cancelled by another thread using one of two
techniques:
1. Asynchronous cancellation
2. Deferred cancellation
1. Asynchronous Cancellation
It means cancellation of the thread immediately.
2. Deferred Cancellation
In this method, a flag is set indicating that the thread should cancel itself when it is
feasible.
For example, if multiple database threads are concurrently searching through a
database and one thread returns the result, the remaining threads might be cancelled.
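Deferred cancellation can be sketched with a flag that the target thread checks at safe points, cancelling itself rather than being killed asynchronously. The search loop below is an illustrative stand-in for the database-search example above.

```python
import threading
import time

# Deferred cancellation: another thread sets `cancel_flag`, and the target
# thread checks it at a safe cancellation point and exits cleanly.
cancel_flag = threading.Event()
checked = []

def searcher():
    for record in range(1_000_000):
        if cancel_flag.is_set():     # safe cancellation point
            checked.append("cancelled")
            return
        # ... examine one database record here (stand-in work) ...
        time.sleep(0.001)

t = threading.Thread(target=searcher)
t.start()
cancel_flag.set()                    # another thread requests cancellation
t.join()
print(checked)   # ['cancelled']
```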


d. Thread Pools
 With multithreading in a web server, whenever the server receives a request it creates a
separate thread to service the request.
 A thread pool instead creates a number of threads at process start-up and places them
into a pool, where they sit and wait for work.
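The thread-pool idea can be sketched with Python's standard library ThreadPoolExecutor: a fixed number of worker threads is created up front and services many submitted requests. The handle_request function and request IDs are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

# A pool of 4 worker threads serving 8 "requests": threads are created once
# at start-up and reused, instead of creating one thread per request.
def handle_request(request_id):
    return f"served {request_id}"

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(handle_request, i) for i in range(8)]
    responses = [f.result() for f in futures]   # in submission order

print(responses[0])   # 'served 0'
```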
e. Thread Local Storage
The benefit of using threads in the first place is that most data is shared among the
threads; but sometimes threads also need thread-specific data.
The major thread libraries (Pthreads, Win32 and Java) provide support for
thread-specific data, which is called TLS (thread local storage).

Thread Libraries
 Thread libraries provide programmers with an Application Program Interface for
creating and managing threads.
 Thread libraries may be implemented either in user space or in kernel space
There are two primary ways of implementing a thread library:
 The first way is to provide a library entirely in user space, with no kernel support.
 The second way is to implement a kernel-level library supported directly by the
operating system.
 There are Three Main Thread Libraries in use today:
1. POSIX Pthreads - may be provided as either a user or kernel library, as an extension to
the POSIX standard.
 pThreads are available on Solaris, Linux, Mac OSX, Tru64, and via public domain
shareware for Windows.
 Global variables are shared amongst all threads. 
 One thread can wait for the others to rejoin before continuing. 
2. Win32 threads - provided as a kernel-level library on Windows systems.
 It is Similar to pThreads.
3. Java threads –
 Java threads are managed by the Java Virtual Machine (JVM).
 Their implementation is based on whatever OS and hardware the JVM is running
on, i.e. either Pthreads or Win32 threads depending on the system.


UNIT-3 Process Management


Deadlock
Deadlock: “Deadlock is a situation when a set of processes are blocked because each
process is holding a resource and waiting for another resource acquired by some other
process”. (or) Deadlock is a situation in which several processes compete for a
finite number of resources.
In a multiprogramming system, a process requests a resource, and if the resource is
not available then the process enters a waiting state. The waiting process may never change
state, because the resources it needs are held by other waiting processes. This situation is
called a deadlock.
Consider a resource-allocation graph in which process P1 holds resource R1 and is
waiting for resource R2, while process P2 holds resource R2 and is waiting for resource R1.
Neither process can proceed, so this situation is a deadlock.

Deadlock characterization (or) Conditions for Deadlock


The following four conditions must hold simultaneously for a deadlock to occur.
1) Mutual exclusion: At least one resource must be held in a non-sharable mode. It means
only one process at a time can use the resource. If another process requests the same
resource, the requesting process must wait until the resource has been released.
2) Hold and wait: A process must be holding at least one resource and waiting for
another resource that is held by some other process.
3) No preemption: Resources cannot be preempted. It means no resource can be forcibly
removed from a process holding it.


4) Circular wait: A set of processes wait for resources in a circular chain. For example, P1
is holding resource R1 and waiting for resource R2, while P2 is holding resource R2
and waiting for resource R1.

Resource –Allocation Graph:


A resource-allocation graph is an analytical tool used to verify whether a system is in a
deadlock state or not.

In such a graph, P1 and P2 represent processes, R1 and R2 represent resources, and a
dot inside a resource rectangle represents one instance of that resource. An edge from a
resource to a process means the resource is assigned to (held by) that process; an edge from
a process to a resource means the process is waiting for (requesting) that resource.
A cycle such as P1 → R2 → P2 → R1 → P1 indicates a circular wait and hence a
deadlock; if the graph contains no cycle, there is no deadlock.


Methods for Handling Deadlocks


The deadlock problem can be solved in three ways. They are
1. Deadlock prevention
2. Deadlock Avoidance
3. Deadlock detection and recovery

(1). Deadlock Prevention


When the four conditions (mutual exclusion, hold and wait, no preemption, circular
wait) hold in the system, a deadlock can occur. If at least one of these conditions cannot
hold, we can prevent the occurrence of a deadlock. The strategy of deadlock prevention is
simply to design the system in such a way that the possibility of deadlock is excluded.
a) Mutual exclusion:
 The mutual-exclusion condition can be prevented whenever the resources are sharable.
 Sharable resources do not require mutually exclusive access, and thus cannot be
involved in a deadlock.
 Read-only files are a good example of a sharable resource.
 Some resources, such as files, may allow multiple accesses for reading but only
exclusive access for writing.
 If more than one process requires write permission, a deadlock can occur.
b) Hold & wait:
 The hold-and-wait condition can be prevented by guaranteeing that whenever a process
requests a resource, it does not hold any other resources.
 There are two approaches for this:
 One approach requires that each process request all of its resources at one time.
 Another approach requires that a process holding resources release them before
requesting a new resource, and then re-acquire the released resources along
with the new request.
c) No preemption:
 If a process requests a resource that is held by another waiting process, then the
resource may be preempted from the waiting process.
 In a second approach, if a process requests resources that are not presently
available, then all the resources it currently holds are preempted.
d) Circular wait:
 The circular-wait condition can be prevented by assigning each resource a unique
number.
 A process can then request resources only in increasing order of this numbering.


 For example, if process P1 is allocated resource R5, then a later request by P1 for R4 or
R3, which are numbered lower than R5, will not be granted. Only requests for
resources numbered higher than R5 will be granted.
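This resource-ordering rule can be sketched in Python (a minimal illustration using two standard locks; the helper names and the two-resource setup are assumptions for the example):

```python
import threading

# Assign each resource a fixed number; every thread acquires locks
# in increasing order of that number, so a circular wait can never form.
locks = {1: threading.Lock(), 2: threading.Lock()}

def acquire_in_order(*resource_ids):
    ordered = sorted(resource_ids)   # always take the lowest number first
    for rid in ordered:
        locks[rid].acquire()
    return ordered

def release(resource_ids):
    for rid in reversed(resource_ids):
        locks[rid].release()

held = acquire_in_order(2, 1)  # requested out of order, taken as [1, 2]
release(held)
print(held)  # [1, 2]
```

Because every thread takes locks in the same global order, no cycle of "holds one, waits for the other" can arise between two threads.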

(2). Deadlock Avoidance


 In deadlock avoidance, we restrict resource requests so as to prevent at least one of the
four conditions of deadlock.
 This can lead to inefficient use of resources and inefficient execution of processes.
 With deadlock avoidance, a decision is made dynamically whether the current
resource-allocation request can safely be granted.
 A request is refused if granting it could potentially lead to a deadlock; deadlock
avoidance therefore requires knowledge of future process resource requests.
 We can describe two approaches to deadlock avoidance:
 Do not start a process if its demands might lead to deadlock.
 Do not grant an incremental resource request by a process if this allocation might
lead to deadlock.
 A deadlock-avoidance algorithm ensures that a process will never enter an unsafe or
deadlocked state.
 Each process declares in advance the maximum number of resources of each type that
it may need; the system also tracks the available resources, the allocated resources,
and the maximum demand of the processes.
 If the system can allocate resources to each process in some order, according to its
requirements, without a deadlock occurring, then the state is called a safe state.
 A safe state is not a deadlocked state, and not all unsafe states are deadlocked; but
from an unsafe state, a deadlock may occur.
 We can recognize a safe state by using the Banker’s algorithm.

(Figure: the set of deadlocked states is a subset of the unsafe states; the safe states lie
outside the unsafe region.)
Resource allocation:
Consider a system with a finite number of processes and a finite number of resources. At
any time, a process may have zero or more resources allocated to it. The state of the system is


reflected by the current allocation of resources to processes. The state may be safe state or
unsafe state.
Safe State:
A state is safe if the system can allocate resources to each process, up to its maximum,
in some order and still avoid a deadlock; such an order is called a safe sequence.
Unsafe State:
A state is unsafe if no safe sequence exists. An unsafe state does not necessarily lead to
a deadlock, but the system can no longer guarantee that a deadlock will be avoided.
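The safety check at the heart of the Banker’s algorithm can be sketched as follows (a minimal Python illustration; the function name and the one-resource-type example values are assumptions, not taken from the syllabus):

```python
def is_safe(available, max_need, allocation):
    """Return True if some order lets every process finish (a safe state)."""
    n = len(max_need)
    work = list(available)        # resources currently free
    finished = [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            need = [m - a for m, a in zip(max_need[i], allocation[i])]
            if not finished[i] and all(nd <= w for nd, w in zip(need, work)):
                # Process i can run to completion and return its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)

# One resource type with 3 units free; P1 holds 2 (max 5), P2 holds 1 (max 4).
print(is_safe([3], [[5], [4]], [[2], [1]]))  # True: P1 can finish, then P2
```

If no process's remaining need fits in the free resources, no safe sequence exists and the state is unsafe.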

(3). Deadlock Detection and Recovery


 If a system does not use either a deadlock-prevention or a deadlock-avoidance
algorithm, then a deadlock situation may occur.
 The deadlock detection and recovery technique is used after the system has entered a
deadlock state.
 A resource-allocation graph (RAG) is used in the deadlock-detection algorithm.


 A detection algorithm examines the state of the system to determine whether a
deadlock has occurred.
 A recovery algorithm is then used to recover from the deadlock.
1. Deadlock Detection: Deadlock detection is the process of determining whether a
deadlock exists, and of identifying the processes and resources involved in it. The basic
idea is to check the current resource allocation and availability, and to determine
whether the system is in a deadlocked state.
Detection strategies do not restrict process actions; with deadlock detection, requested
resources are granted to processes whenever possible. Periodically, the OS runs an
algorithm to detect the circular-wait condition.
1. A deadlock exists, if and only if, there are unmarked processes at the end of the
algorithm.
2. Each unmarked process is deadlocked.
3. The strategy in this algorithm is to find a process, whose request can be satisfied
with the available resources.
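The circular-wait check can be sketched as cycle detection on a wait-for graph (a minimal Python illustration; the graph representation, where each process maps to the single process it waits on, is an assumption for the example):

```python
def has_deadlock(wait_for):
    """Detect a cycle in a wait-for graph {process: process_it_waits_on}."""
    for start in wait_for:
        seen = set()
        node = start
        while node in wait_for:   # follow the chain of waits
            if node in seen:
                return True       # revisited a node: circular wait found
            seen.add(node)
            node = wait_for[node]
    return False

print(has_deadlock({"P1": "P2", "P2": "P1"}))  # True: P1 and P2 wait on each other
print(has_deadlock({"P1": "P2"}))              # False: the chain of waits ends
```

A process that appears twice while following the chain of waits is part of a cycle, and every process on that cycle is deadlocked.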
2. Deadlock Recovery: When a detection algorithm finds that a deadlock exists, several
recovery methods may be used.
a) Process Termination: To eliminate deadlocks by aborting a process, we use one of two
methods. In both methods, the system reclaims all resources allocated to the
terminated processes.
1. Abort all deadlocked processes: This method clearly breaks the deadlock
cycle. However, these processes may have computed for a long time, and the
results of these partial computations must be discarded and recomputed later.
2. Abort one process at a time, until the deadlock cycle is eliminated: This
method incurs considerable overhead, since after each process is aborted a
deadlock-detection algorithm must be run to determine whether any
processes are still deadlocked.
b) Resource Preemption: Resources are preempted from the processes that are involved
in deadlock. Then preempted resources are allocated to other processes. So that, there
is a possibility of recovering the system from deadlock.

Concurrency Condition
Concurrency means that an application is making progress on more than one task at the
same time (concurrently). If the computer has only one CPU, the application may not
make progress on more than one task at exactly the same instant, but more than one
task is being processed at a time inside the application: it does not completely finish one
task before it begins the next.


There are several kinds of concurrency. In a single-processor operating system, there is
little point to concurrency except to support multiple users, or to support threads that
are likely to become blocked waiting on I/O when we do not want to waste CPU cycles.
In a multi-processor or multi-core system, concurrency can greatly improve throughput.

Process Synchronization
Process Synchronization means sharing system resources by processes in such a way
that, Concurrent access to shared data is handled thereby minimizing the chance of
inconsistent data. Maintaining data consistency demands mechanisms to ensure
synchronized execution of cooperating processes.

Process synchronization was introduced to handle problems that arise when multiple
processes execute concurrently.
Process synchronization can be provided by using several different tools such as
semaphores, mutexes, and monitors.
Synchronization is important for both user applications and implementation of
operating system.

Critical Section Problem


Consider a system consisting of n processes (P0, P1, ………, Pn-1). Each process has a
segment of code, known as its critical section, in which the process may be changing
common variables, updating a table, writing a file, and so on. The important feature of
the system is that when one process is executing in its critical section, no other process
is allowed to execute in its critical section; the execution of critical sections by the
processes is mutually exclusive. The critical-section problem is to design a protocol that
the processes can use to cooperate: each process must request permission to enter its
critical section. The section of code implementing this request is the entry section.
The critical section is followed by an exit section, and the remaining code is the
remainder section.

Example:
While (1)
{
Entry Section;
Critical Section;
Exit Section;
Remainder Section;
}
A solution to the critical section problem must satisfy the following three conditions.


1. Mutual Exclusion: If process Pi is executing in its critical section, then no
other process can be executing in its critical section.
2. Progress: If no process is executing in its critical section and some
processes wish to enter their critical sections, then only those processes
that are not executing in their remainder sections can participate in
deciding which will enter its critical section next.
3. Bounded waiting: There exists a bound on the number of times that
other processes are allowed to enter their critical sections after a
process has made a request to enter its critical section and before that
request is granted.
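The entry/exit sections above can be sketched in Python with a lock (a minimal illustration; the counter and thread counts are assumptions for the example):

```python
import threading

counter = 0
lock = threading.Lock()          # provides the entry and exit sections

def increment(times):
    global counter
    for _ in range(times):
        with lock:               # entry section: acquire the lock
            counter += 1         # critical section: update shared data
        # lock released on leaving `with` (exit section);
        # the remainder section would follow here

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Without the lock, the concurrent `counter += 1` updates could interleave and lose increments; with it, mutual exclusion guarantees the final count.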

Semaphores
Semaphore is a synchronization tool defined by Dijkstra in 1965 for managing
concurrent process by using the value of simple variable.

Semaphore is a simply a variable. This variable is used to solve critical section problem
and to achieve process synchronization in the multi processing environment.

For the solution to the critical-section problem, one synchronization tool is used, known
as the semaphore. A semaphore ‘S’ is an integer variable which is accessed through two
standard operations: wait and signal. These operations were originally termed ‘P’ (for
wait, meaning to test) and ‘V’ (for signal, meaning to increment). The classical
definition of wait is

Wait (S)
{
While (S <= 0)
{
Test;
}
S--;
}
The classical definition of signal is
Signal (S)
{
S++;
}


In the wait operation, the testing of S and the decrement must together be executed
without interruption (atomically).

Wait: The wait operation decrements the value of its argument S if it is positive. If S is
zero or negative, the caller waits and no decrement is performed.

Signal: The signal operation increments the value of its argument S.
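The classical wait/signal pair can be sketched in Python with a condition variable standing in for the required atomicity (a minimal illustration; the class name and blocking-instead-of-busy-waiting are choices made for the example, not part of the syllabus code):

```python
import threading

class Semaphore:
    """Counting semaphore mirroring the classical wait/signal (P/V)."""
    def __init__(self, value):
        self.value = value
        self.cond = threading.Condition()

    def wait(self):                  # P: test, then decrement, atomically
        with self.cond:
            while self.value <= 0:
                self.cond.wait()     # block instead of busy-waiting on the test
            self.value -= 1

    def signal(self):                # V: increment and wake one waiter
        with self.cond:
            self.value += 1
            self.cond.notify()

s = Semaphore(1)
s.wait()        # value goes 1 -> 0; a second wait() here would block
s.signal()      # value goes 0 -> 1
print(s.value)  # 1
```

The `with self.cond:` block is what makes the test and the decrement indivisible, which is exactly the atomicity requirement stated above.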

Types of Semaphores:

Binary Semaphore:
A binary semaphore is a semaphore with an integer value which can range
between 0 and 1. Let ‘S‘ be a counting semaphore. To implement the binary
semaphore we need following the structure of data.

Binary Semaphores
S1, S2;int C;

Initially S1 = 1, S2 = 0 and the value of C is set to the initial value of the counting
semaphore ‗S‘.Then the wait operation of the binary semaphore can be
implemented as follows.
Wait (S1);
C--;
if (C < 0)
{
Signal (S1);
Wait (S2);
}
Signal (S1);
The signal operation can be implemented as follows:
Wait (S1);
C++;
if (C <= 0)
Signal (S2);
else
Signal (S1);

Classical Problem on Synchronization


There are various types of problem which are proposed for synchronization scheme such as

Bounded Buffer Problem (Producer–Consumer): This problem is commonly
used to illustrate the power of synchronization primitives. In this scheme we


assume that the pool consists of ‘N’ buffers, each capable of holding one item.
The ‘mutex’ semaphore provides mutual exclusion for access to the buffer pool and
is initialized to the value one. The empty and full semaphores count the number of
empty and full buffers respectively. The semaphore empty is initialized to ‘N’ and the
semaphore full is initialized to zero. This problem is known as the producer and
consumer problem. The code of the producer produces full buffers and the
code of the consumer produces empty buffers. The structure of the producer process
is as follows:

do {
produce an item in nextp
............
Wait (empty);
Wait (mutex);
...........
add nextp to buffer
............
Signal (mutex);
Signal (full);
} While (1);
The structure of consumer process is as follows:
do {
Wait (full);
Wait (mutex);
...........
Remove an item from buffer to nextc
...........
Signal (mutex);
Signal (empty);
............
Consume the item in nextc;
. . . . . . . .. . . .. .
} While (1);
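The producer and consumer structures above can be sketched as a runnable Python example using the same three semaphores (a minimal illustration; the buffer size, item count, and list-based buffer are assumptions for the example):

```python
import threading

N = 5
buffer = []
mutex = threading.Semaphore(1)   # mutual exclusion on the buffer pool
empty = threading.Semaphore(N)   # counts empty slots, initialized to N
full = threading.Semaphore(0)    # counts full slots, initialized to 0
consumed = []

def producer():
    for item in range(10):
        empty.acquire()          # Wait (empty)
        mutex.acquire()          # Wait (mutex)
        buffer.append(item)      # add nextp to buffer
        mutex.release()          # Signal (mutex)
        full.release()           # Signal (full)

def consumer():
    for _ in range(10):
        full.acquire()           # Wait (full)
        mutex.acquire()          # Wait (mutex)
        consumed.append(buffer.pop(0))   # remove an item to nextc
        mutex.release()          # Signal (mutex)
        empty.release()          # Signal (empty)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The `empty` semaphore blocks the producer when all N slots are full, and `full` blocks the consumer when the buffer is empty, so items always arrive in order.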


Reader Writer Problem: In this type of problem there are two types of processes:
reader processes and writer processes. The reader process is responsible only for
reading, and the writer process is responsible for writing. This is an important
problem of synchronization which has several variations, such as:
o The simplest one is referred to as the first readers–writers problem, which
requires that no reader be kept waiting unless a writer has already
obtained permission to use the shared object. In other words, no
reader should wait for other readers to finish simply because a writer
is waiting.
o The second readers–writers problem requires that once a writer is
ready, the writer performs its write operation as soon as
possible.
The structure of a reader process is as follows:
Wait (mutex);
readcount++;
if (readcount == 1)
Wait (wrt);
Signal (mutex);
...........
Reading is performed
...........
Wait (mutex);
readcount--;
if (readcount == 0)
Signal (wrt);
Signal (mutex);
The structure of the writer process is as follows:
Wait (wrt);
Writing is performed;
Signal (wrt);
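The reader and writer structures above can be sketched in runnable Python (a minimal illustration; the shared dictionary, the log list, and running the writer before the reader are assumptions made so the output is deterministic):

```python
import threading

readcount = 0
mutex = threading.Semaphore(1)   # protects readcount
wrt = threading.Semaphore(1)     # grants exclusive access to writers
shared = {"value": 0}
log = []

def reader():
    global readcount
    mutex.acquire()
    readcount += 1
    if readcount == 1:
        wrt.acquire()            # first reader locks out writers
    mutex.release()
    log.append(("read", shared["value"]))   # reading is performed
    mutex.acquire()
    readcount -= 1
    if readcount == 0:
        wrt.release()            # last reader lets writers back in
    mutex.release()

def writer():
    wrt.acquire()
    shared["value"] += 1         # writing is performed
    wrt.release()

w = threading.Thread(target=writer)
w.start(); w.join()
r = threading.Thread(target=reader)
r.start(); r.join()
print(log)  # [('read', 1)]
```

Because only the first reader takes `wrt` and only the last reader releases it, any number of readers may overlap while writers wait for exclusive access.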


1. What is Deadlock? Explain about Deadlock?


2. What is Deadlock? Explain Deadlock Prevention?
3. What is Deadlock? Explain about Deadlock Avoidance?
4. What is Deadlock? Explain about Deadlock Detection and Recovery?
5. What is Semaphore? Discuss about Semaphore?
6. What are the solutions of Critical Section problem?
7. Define Concurrency?
8. Explain Classical Inter-process Communication problems? (OR) Explain Classic problems of
Synchronization?


UNIT - 4 Memory
EnM
d aUnnaitg-e
4ment & Virtual Memory

Operating System - Memory Management

 Memory management is the functionality of an operating system which handles or


manages primary memory and moves processes back and forth between main memory
and disk during execution.
 Memory management keeps track of each and every memory location,
 It checks how much memory is to be allocated to processes.
 It decides which process will get memory at what time.
 It tracks whenever some memory gets freed or unallocated and correspondingly it updates
the status.
The operating system takes care of mapping the logical addresses to physical addresses at
the time of memory allocation to the program.
Memory Addresses & Description
There are three types of addresses used in a program before and after memory is allocated

1 Symbolic addresses :- The addresses used in a source code. The variable names,
constants, and instruction labels are the basic elements of the symbolic address space.
2 Relative addresses :- At the time of compilation, a compiler converts symbolic addresses
into relative addresses.
3 Physical addresses :- The loader generates these addresses at the time when a program is
loaded into main memory.
The set of all logical addresses generated by a program is referred to as a logical
address space. The set of all physical addresses corresponding to these logical addresses is
referred to as a physical address space.

VIRTUAL (LOGICAL) AND PHYSICAL ADDRESS SPACE:

 An address generated by the CPU is commonly referred to as a logical address.
 An address loaded into the memory-address register of the memory unit is referred
to as a physical address.
 The compile-time and load-time address-binding methods generate identical logical and
physical addresses.
 The set of all logical addresses generated by a program is a logical address space
 The set of all physical addresses corresponding to these logical addresses is a physical
address space
 The run-time mapping from virtual to physical addresses is done by a
hardware device called the MEMORY MANAGEMENT UNIT (MMU)


 For example, with dynamic relocation using a base (relocation) register:

If the base is at 14000, then an attempt by the user to address location 0 is
dynamically relocated to location 14000; an access to location 346 is mapped
to location 14346.

Memory Allocation
 One of the simplest methods for allocating memory is to divide memory into several fixed-
sized partitions.
 Each partition may contain exactly one process. Thus, the degree of multiprogramming is
bound by the number of partitions.
 In this multiple partition method, when a partition is free, a process is selected from the
input queue and is loaded into the free partition.
 When the process terminates, the partition becomes available for another process.
 In the Variable partition method, the operating system keeps a table, indicating which
parts of memory are available and which are occupied.
 Initially, all memory is available for user processes and is considered one large block of
available memory, a hole.
 The first fit, best fit and worst fit strategies are the most commonly used schemes to select
a free hole from the set of available holes.

First fit: Allocate the first hole that is big enough. Searching can start either at the beginning
of the set of holes or at the location where the previous first-fit search ended. We can stop
searching as soon as we find a free hole that is large enough.

Best fit: Allocate the smallest hole that is big enough. We must search the entire list, unless
the list is ordered by size. This strategy produces the smallest leftover hole.

Worst fit: Allocate the largest hole. Again, we must search the entire list, unless it is sorted by
size. This strategy produces the largest leftover hole, which may be more useful than the smaller
leftover hole from a best-fit approach.
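The three strategies can be sketched as a small Python function over a list of hole sizes (a minimal illustration; the function name and the example hole list are assumptions, not part of the syllabus):

```python
def allocate(holes, size, strategy):
    """Pick the index of a hole for a request of `size`, or None if none fits."""
    candidates = [(h, i) for i, h in enumerate(holes) if h >= size]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]  # lowest index that fits
    if strategy == "best":
        return min(candidates)[1]    # smallest hole that fits
    if strategy == "worst":
        return max(candidates)[1]    # largest hole

holes = [100, 500, 200, 300, 600]
print(allocate(holes, 212, "first"))  # 1: 500 is the first hole big enough
print(allocate(holes, 212, "best"))   # 3: 300 leaves the smallest leftover
print(allocate(holes, 212, "worst"))  # 4: 600 leaves the largest leftover
```

The same request lands in a different hole under each strategy, which is exactly the trade-off the three schemes make between search cost and leftover fragment size.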


MEMORY ALLOCATION STRATEGIES


1. Fixed Partitioning :
 Fixed partitioning is a contiguous memory management technique.
 The main memory is divided into fixed-sized partitions, which can be of equal or
unequal size.
 Whenever a process must be allocated memory, a free partition that is big enough to
hold the process is found, and that partition is allocated to the process.
 If no free partition is available, the process waits in a queue until memory can be
allocated.
 It is one of the oldest memory management techniques and is easy to implement.

2. Variable Partitioning :
 Variable partitioning is a contiguous memory management technique.
 The main memory is not divided into partitions in advance; each process is allocated
exactly as much memory as it needs.
 The space which is left over is considered free space, which can be used by other
processes.
 It also provides the concept of compaction.
 In compaction, the free spaces scattered through memory are combined into a
single large block of memory.


Paging

A computer can address more memory than the amount physically installed on
the system. This extra memory is actually called virtual memory, and it is a section of
a hard disk that is set up to emulate the computer's RAM. The paging technique plays
an important role in implementing virtual memory.

Paging is a memory management technique in which process address space is


broken into blocks of the same size called pages (size is power of 2, between 512
bytes and 8192 bytes). The size of the process is measured in the number of pages.

Similarly, main memory is divided into small fixed-sized blocks of (physical)


memory called frames and the size of a frame is kept the same as that of a page to
have optimum utilization of the main memory and to avoid external fragmentation


Address Translation

Page address is called logical address and represented by page number and the
offset.

Logical Address = Page number + page offset

Frame address is called physical address and represented by a frame number and
the offset.

Physical Address = Frame number + page offset

A data structure called page map table is used to keep track of the relation between a
page of a process to a frame in physical memory.

When the system allocates a frame to a page, it translates the logical address into the
corresponding physical address and creates an entry in the page table, to be used
throughout the execution of the program.
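The translation can be sketched in a few lines of Python (a minimal illustration; the 4096-byte page size and the sample page-table entries are assumptions for the example):

```python
PAGE_SIZE = 4096                     # assumed page size (a power of 2)
page_table = {0: 5, 1: 2, 2: 7}      # page number -> frame number

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # page number
    offset = logical_address % PAGE_SIZE   # page offset, carried over unchanged
    frame = page_table[page]               # page-table lookup
    return frame * PAGE_SIZE + offset      # physical address

print(translate(4100))  # page 1, offset 4 -> frame 2 -> physical 8196
```

Only the page number is replaced by a frame number; the offset within the page is the same in the logical and physical address, which is why the page and frame sizes must match.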
Advantages and Disadvantages of Paging
 Paging reduces external fragmentation, but still suffer from internal fragmentation. 
 Paging is simple to implement and assumed as an efficient memory management
technique.
 Due to equal size of the pages and frames, swapping becomes very easy.
 Page table requires extra memory space, so may not be good for a system having small
RAM 

Segmentation
Segmentation is another way of dividing the addressable memory. It is another
scheme of memory management and it generally supports the user view of memory.
The Logical address space is basically the collection of segments. Each segment has a
name and a length.
Basically, a process is divided into segments. Like paging, segmentation divides
or segments the memory. But there is a difference: while paging divides the memory
into fixed-size units, segmentation divides the memory into variable-sized segments,
which are then loaded into the logical memory space.
A Program is basically a collection of segments. And a segment is a logical unit
such as:
 Main program
 Procedure
 Function
 Method
 Object
 Local variable and global variables.
 Symbol table
 Common block
 Stack
 Arrays
Types of Segmentation
Given below are the types of Segmentation:
 Virtual Memory Segmentation: With this type of segmentation, each process is
segmented into n divisions; most importantly, they are not all segmented at once.
 Simple Segmentation: With this type, each process is segmented into n divisions
which are all segmented at once, at run time, but they can be non-contiguous (that
is, they may be scattered in memory).
Characteristics of Segmentation
Some characteristics of the segmentation technique are as follows:
 The Segmentation partitioning scheme is variable-size.
 Partitions of the secondary memory are commonly known as segments.
 Partition size mainly depends upon the length of modules.
 Thus with the help of this technique, secondary memory and main memory are
divided into unequal-sized partitions.


Advantages of Segmentation
1. No internal fragmentation
2. Average Segment Size is larger than the actual page size.
3. Less overhead
4. It is easier to relocate segments than entire address space.
5. The segment table is of lesser size as compared to the page table in paging.
Disadvantages
1. It can suffer from external fragmentation.
2. It is difficult to allocate contiguous memory to variable-sized partitions.
3. It requires costly memory management algorithms.

Operating System - Virtual Memory


A computer can address more memory than the amount physically installed on
the system. This extra memory is actually called virtual memory and it is a section
of a hard disk that's set up to emulate the computer's RAM.
The main advantage of this scheme is that programs can be larger than
physical memory. Virtual memory serves two purposes. First, it allows us to extend
the use of physical memory by using disk. Second, it allows us to have memory
protection, because each virtual address is translated to a physical address.
Following are the situations, when entire program is not required to be loaded fully
in main memory.
 User written error handling routines are used only when an error occurred in
the data or computation. 
 Certain options and features of a program may be used rarely. 
 Many tables are assigned a fixed amount of address space even though only a
small amount of the table is actually used.
 The ability to execute a program that is only partially in memory would confer
many benefits:
 Fewer I/O operations would be needed to load or swap each user program into
memory.


 A program would no longer be constrained by the amount of physical memory


that is available. 
 Each user program could take less physical memory, so more programs could be
run at the same time, with a corresponding increase in CPU utilization and
throughput.
In modern general-purpose microprocessors, a memory management unit, or MMU, is
built into the hardware. The MMU's job is to translate virtual addresses into physical
addresses.

Virtual memory is commonly implemented by demand paging. It can also be


implemented in a segmentation system. Demand segmentation can also be used to
provide virtual memory.


UNIT - 5 File and I/O Management, OS Security



Directory Structure
 A Directory (folder) is a special file, which contains information about stored files.
 The directory structure organizes all the files in the system.
 A disk in the system can store huge information in the form of files.
 To manage all the files, a directory structure must exist on each disk.
 Each disk in a system is divided into separate partitions. These partitions are called
virtual disks or drives.
 Each partition is treated as a separate storage device.
 Each partition contains a default directory called root directory.
 The root directory may contain subdirectories. Again each subdirectory may contain
subdirectories.
Various operations on the directory:
1. Searching a file: Finding the entry for a particular file in the directory structure.
2. Creating a file: A new file is created and added to the directory.
3. Deleting a file: When a file is not needed, it can be removed from the directory.
4. Listing a directory: To view all or some of the files of the directory.
5. Renaming a file: Changing the name of a file in a directory.
Types of Directory Structure:
The directory structure can be classified into two types. They are
1. Single-Level directory
2. Two-Level directory.
1. Single-Level Directory: The simplest directory structure is the single-level directory. All
files are contained in the same directory, which is easy to support and understand.
However, when the number of files increases or when the system has more than
one user, a single-level directory has limitations. All files are in the same directory, they
must have different names.
\ Root Directory

File1 File2 File3 File4

2. Two-Level Directory: A Two-level directory contains separate directories for each user.
Each directory can contain subdirectories. So, different users may create files with the
same name within a directory.
\ Root Directory

Dir A Dir B File4

Dir C File1
File2 File3


FILE OPERATIONS IN OS
A file is a collection of related information. The files are stored in secondary storage
devices. In general, a file is a sequence of bits, bytes, lines, or records.
The information in a file is defined by its creator. Different types of information may be
stored in a file. The information may be source programs, object programs, executable
programs, numeric data, text, images, sound recordings, video information, and so on.

Here are some common file operations:


 File Create operation
 File Delete operation
 File Open operation
 File Close operation
 File Read operation
 File Write operation
 File Append operation
 File Seek operation
 File Get attribute operation
 File Set attribute operation
 File Rename operation
File Create Operation
 The file is created with no data.
 The file create operation is the first operation performed on a file.
 No other operation can be performed on a file until it has been created.
File Delete Operation
 A file should be deleted when it is no longer needed, to free up disk space.
 The file delete operation is the last operation performed on a file.
 After deletion, the file no longer exists.
File Open Operation
 The process must open the file before using it.
File Close Operation
 The file must be closed to free up the internal table space, when all the accesses are
finished and the attributes and the disk addresses are no longer needed.
File Read Operation
 The file read operation is performed just to read the data that are stored in the
required file.
File Write Operation
 The file write operation is used to write the data to the file, again, generally at the
current position.
File Append Operation
 The file append operation is the same as the file write operation, except that it only
adds data at the end of the file.
File Seek Operation
 For random access files, a method is needed just to specify from where to take the
data. Therefore, the file seek operation performs this task.
File Rename Operation

 The file rename operation is used to change the name of the existing file.
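The operations above can be sketched with Python's standard file API (the file names here are purely illustrative):

```python
import os

# Create and write: opening with "w" creates the file if it does not exist.
with open("demo.txt", "w") as f:
    f.write("hello\n")

# Append: "a" mode always adds data at the end of the file.
with open("demo.txt", "a") as f:
    f.write("world\n")

# Read and seek: move the file position before reading.
with open("demo.txt", "r") as f:
    f.seek(6)            # skip the first 6 bytes ("hello\n")
    data = f.read()      # reads "world\n"

# Rename, then delete when the file is no longer needed.
os.rename("demo.txt", "renamed.txt")
os.remove("renamed.txt")
```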

File Allocation Methods


The file allocation methods define how the files are stored in the disk blocks. There are
three main disk space or file allocation methods.
1. Contiguous Allocation
2. Linked Allocation
3. Indexed Allocation
The main idea behind these methods is to provide:
 Efficient disk space utilization.
 Fast access to the file blocks.
All three methods have their own advantages and disadvantages, as discussed below:

1. Contiguous Allocation
In this scheme, each file occupies a contiguous
set of blocks on the disk.
For example, if a file requires n blocks and is
given a block b as the starting location, then the blocks
assigned to the file will be: b, b+1, b+2,……b+n-1. This
means that given the starting block address and the
length of the file (in terms of blocks required), we can
determine the blocks occupied by the file.
The directory entry for a file with contiguous allocation
contains
 Address of starting block
 Length of the allocated portion.
The file ‘mail’ in the following figure starts from the
block 19 with length = 6 blocks. Therefore, it occupies 19,
20, 21, 22, 23, 24 blocks.
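The formula b, b+1, ..., b+n-1 can be sketched directly in Python, using the 'mail' example from the figure:

```python
def contiguous_blocks(start, length):
    """Blocks occupied by a file under contiguous allocation: b, b+1, ..., b+n-1."""
    return list(range(start, start + length))

# The 'mail' file: starting block 19, length 6 blocks.
blocks = contiguous_blocks(19, 6)
print(blocks)  # [19, 20, 21, 22, 23, 24]
```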
2. Linked List Allocation
In this scheme, each file is a linked list of disk
blocks which need not be contiguous. The disk blocks can
be scattered anywhere on the disk.
The directory entry contains a pointer to the starting and
the ending file block. Each block contains a pointer to the
next block occupied by the file.
The file ‘jeep’ in following image shows how the
blocks are randomly distributed. The last block (25) contains
-1 indicating a null pointer and does not point to any other
block.
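Walking such a chain of per-block pointers can be sketched in Python; the pointer table below is illustrative, modeled loosely on the 'jeep' example (the figure's exact chain is not reproduced here):

```python
# next_block maps each block to the next block of the file; -1 marks the end
# of the chain (a null pointer). The chain here starts at block 9.
next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: -1}

def linked_blocks(start, table):
    """Follow the per-block pointers from the starting block to the null pointer."""
    blocks, b = [], start
    while b != -1:
        blocks.append(b)
        b = table[b]
    return blocks

print(linked_blocks(9, next_block))  # [9, 16, 1, 10, 25]
```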
3. Indexed Allocation
In this scheme, a special block known as the Index block
contains the pointers to all the blocks occupied by a file.
Each file has its own index block. The ith entry in the
index block contains the disk address of the ith file block. The directory entry contains the
address of the index block as shown in the image:
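An index-block lookup can be sketched in a few lines of Python (the block addresses below are illustrative):

```python
# The index block holds the addresses of all blocks of a file:
# the ith entry is the disk address of the ith file block.
index_block = [9, 16, 1, 10, 25]       # illustrative disk addresses

def ith_file_block(index_block, i):
    """One read of the index block locates any file block directly."""
    return index_block[i]

print(ith_file_block(index_block, 3))  # 10
```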
Device Management in Operating System

 Device management means controlling the Input/Output devices like disk, microphone,
keyboard, printer, magnetic tape, USB ports, etc.
 A process may require various resources, including main memory, file access, and access
to disk drives, and others.
 If resources are available, they could be allocated, and control returned to the CPU.
Otherwise, the procedure would have to be postponed until adequate resources become
available.
 The system has multiple devices, and in order to handle these physical or virtual devices,
the operating system requires a separate program for each, known as a device driver. The
operating system also determines whether the requested device is available.
The fundamentals of I/O devices may be divided into three categories:
1. Block Device
2. Character Device
3. Network Device
1. Block Device
It stores data in fixed-size blocks, each with its own unique address. For example: disks.
2. Character Device
It transmits or accepts a stream of characters, none of which can be addressed individually.
For instance, keyboards, printers, etc.
3. Network Device
It is used for transmitting the data packets.
File Protection
File protection is keeping information safe in a computer system from physical damage
and improper access. Physical damage of files in the disks can occur due to hardware
problems. Improper access is due to misuse of files by the unauthorized users.
1. Protection from Physical damage:
 Protection from physical damage is, generally provided by maintaining duplicate copies
of files.
 Many computers have systems programs that automatically copy files to tape regularly
(once per day or week or month) to maintain a copy.
 The administrator or the user must maintain this procedure to protect important
information.
Reasons for physical damage: File systems can be damaged for various reasons. Some of
them are:
 Continuous use of hardware: Disks can be damaged due to continuous use of
reading and writing of files
 Power failures: Frequent Power (electrical) problems can damage the system
physically.
 Sabotage: Sabotage means intentional damage.

 Accidental damage: A user can damage a disk unintentionally.


 Software errors: Various bugs in software can also sometimes damage the file system.
2. Protection from improper access:
Protection from improper access can be provided in many ways: access to files can be
permitted or denied depending on whether the user is authorised.
a) Controlled Access:
 Granting or removing only necessary access rights to the users. So, a user has
controlled access.
 Access is permitted or denied depending on the type of access requested.
 Different types of operations may be controlled such as read file, write file, execute
file and so on.
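On Unix-like systems, such per-file access rights can be set and inspected; a small Python sketch using the standard os and stat modules (the file name and policy are illustrative):

```python
import os
import stat

# Create a file, then grant the owner read-only access.
with open("secret.txt", "w") as f:
    f.write("classified\n")
os.chmod("secret.txt", stat.S_IRUSR)     # owner: read; group/others: no access

mode = stat.S_IMODE(os.stat("secret.txt").st_mode)
print(oct(mode))                          # 0o400

# Restore write permission (needed on some systems before deletion), then delete.
os.chmod("secret.txt", stat.S_IRUSR | stat.S_IWUSR)
os.remove("secret.txt")
```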
b) Password protection:
 Another approach to the protection problem is to set a password for each file.
 Each file can then be controlled by its password.
 Only those users who know the password can access the file.
c) Authentication:
 Authentication is the proper verification of a user’s identity, i.e., whether the user is
authorised or not.
 Various biometric systems can be used to prevent unauthorised user entry.
Inter Process Communication - Pipes
 A Pipe is a communication between two or more related or interrelated processes.
 It can be either within one process or a communication between the child and the parent
processes.
 Communication can also be multi-level such as between the parent, child and the grand-
child, etc. Communication is achieved by one process writing into the pipe and other
reading from the pipe.
 The pipe system call provides two file descriptors: one to write into the pipe and
another to read from it.
Ex: The pipe mechanism can be viewed through a real-life analogy, such as filling water
through a pipe into some container, say a bucket, and someone retrieving it, say with a
mug. The filling process corresponds to writing into the pipe, and the retrieving process
corresponds to reading from the pipe: the output of one (the pipe) is the input of the
other (the bucket).
Two-way Communication Using Pipes
 When the parent and the child need to write to and read from the pipes
simultaneously, the solution is two-way communication using pipes.
 Two pipes are required to establish two-way
communication.

Step 1 − Create two pipes. First one is for the parent to write and child to read, say as pipe1.
Second one is for the child to write and parent to read, say as pipe2.
Step 2 − Create a child process.
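The steps above can be sketched for a one-way pipe with Python's os module on a Unix-like system (two such pipes would give two-way communication):

```python
import os

# One-way pipe: the parent writes, the child reads (a minimal Unix sketch).
r, w = os.pipe()            # two file descriptors: read end, write end
pid = os.fork()             # create a child process

if pid == 0:                # child: reads from the pipe
    os.close(w)             # close the unused write end
    msg = os.read(r, 1024)
    os.close(r)
    os._exit(0 if msg == b"hello" else 1)
else:                       # parent: writes into the pipe
    os.close(r)             # close the unused read end
    os.write(w, b"hello")
    os.close(w)
    _, status = os.waitpid(pid, 0)
```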
Buffer
The buffer is an area in the main memory used to store or hold the data temporarily.
In other words, buffer temporarily stores data transmitted from one place to another,
either between two devices or an application.
The act of storing data temporarily in the buffer is called buffering.
Types of Buffering:-
There are three main types of buffering in the operating system, such as:

1. Single Buffer
In Single Buffering, only one buffer is used to transfer the data between two devices.
The producer produces one block of data into the buffer. After that, the consumer
consumes the buffer. Only when the buffer is empty does the producer produce the next block.

2. Double Buffer
In Double Buffering, two buffers are used in place of one.
In this scheme, the producer fills one buffer while the consumer consumes the
other buffer simultaneously. So, the producer does not need to wait for the buffer to empty.
Double buffering is also known as buffer swapping.

3. Circular Buffer
When more than two buffers are used, the buffers' collection is called a circular
buffer.

Each buffer is one unit in the circular buffer. The data transfer rate increases when
using a circular buffer rather than double buffering.
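A circular buffer can be sketched in Python with a fixed-capacity deque (the block names are illustrative): when all slots are full, writing a new block overwrites the oldest one.

```python
from collections import deque

# A circular buffer with 3 slots; appending past capacity drops the oldest block.
buf = deque(maxlen=3)
for block in ["b1", "b2", "b3", "b4"]:
    buf.append(block)

print(list(buf))  # ['b2', 'b3', 'b4']  -- 'b1' was overwritten
```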


Shared Memory
 Shared memory is a memory shared between two or more processes.
 Each process has its own address space; if a process wants to share information
from its own address space with other processes, this is only
possible through IPC (inter-process communication) techniques.
 Shared memory is the fastest inter-process communication
mechanism.
 The operating system maps a memory segment in the address
space of several processes to read and write in that memory segment without calling
operating system functions.
To use shared memory, we have to perform two basic steps:
1. Request a memory segment that can be shared between processes to the operating
system.

2. Associate a part of that memory or the whole memory with the address space of the
calling process.
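The two steps above can be sketched with Python's multiprocessing.shared_memory module (the segment size and contents are illustrative):

```python
from multiprocessing import shared_memory

# Step 1: request a memory segment that can be shared between processes.
shm = shared_memory.SharedMemory(create=True, size=16)

# Step 2: the segment is mapped into this process's address space; another
# process could attach to it with SharedMemory(name=shm.name).
shm.buf[:5] = b"hello"
data = bytes(shm.buf[:5])

shm.close()
shm.unlink()    # release the segment back to the operating system
```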

Operating System Security Policy Mechanism

The process of ensuring OS availability, confidentiality, and integrity is known as
operating system security.
OS security refers to the processes or measures taken to protect the operating system
from dangers, including viruses, worms, malware, and remote hacker intrusions.
Security refers to providing safety for computer system resources like software, CPU,
memory, disks, etc.
It protects against threats such as viruses and unauthorized access, and is
enforced by assuring the operating system's integrity, confidentiality, and availability.

System security may be threatened through two violations, and these are as follows:

1. Threat

A program that has the potential to harm the system seriously.

2. Attack

A breach of security that allows unauthorized access to a resource.

The goal of Security System

There are several goals of system security. Some of them are as follows:

1. Integrity

Unauthorized users must not be allowed to modify the system's objects.

2. Secrecy

The system's objects must only be available to a small number of authorized users.
The system files should not be accessible to everyone.

3. Availability

All system resources must be accessible to all authorized users, i.e., no single
user/process should be able to consume all system resources

Authentication and Authorization


Authentication in Operating System
 In the authentication process, the identity of a user is checked before access to the
system is granted.
 Authentication is done before the authorization process.
 The authentication mechanism determines the user’s identity before revealing sensitive
information.
 It is crucial for systems or interfaces where the user’s priority is to protect
confidential information.
 In this process, the user makes a provable claim about his or her own identity or an
entity’s identity.
 The credentials or claim could be a username and password, a fingerprint, etc.

Access Authorization
 In the authorization process, a user’s access rights are checked before the user
accesses resources.
 The authorization process is done after the authentication process.
 Authorization is the process of giving permission to do or have something.
 The system administrator defines which users are allowed to access which file
directories, hours of access, amount of allocated storage space, etc.
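The two steps, authentication first and authorization second, can be sketched in Python; the user table and permissions below are purely illustrative (real systems store salted password hashes, never plain passwords):

```python
import hashlib

# Hypothetical credential store and permission table (illustrative only).
users = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
permissions = {"alice": {"read"}}

def authenticate(name, password):
    """Step 1: verify the claimed identity against stored credentials."""
    digest = hashlib.sha256(password.encode()).hexdigest()
    return users.get(name) == digest

def authorize(name, action):
    """Step 2: check the authenticated user's access rights."""
    return action in permissions.get(name, set())

ok = authenticate("alice", "s3cret") and authorize("alice", "read")
```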
