Operating Systems Concepts

This document provides an overview of operating system concepts. It discusses operating system structures, processes, CPU scheduling, file systems, storage management, security, and deadlocks. It also defines operating systems and their roles in managing computer resources and allowing users to run applications. Examples of operating systems discussed include DOS, UNIX, Windows, Macintosh, and embedded operating systems.

Uploaded by

Jeremiah Kakulu
Copyright
© All Rights Reserved

OPERATING SYSTEMS

CONCEPTS

03/15/2024 COM 221 1


Module outline
• INTRODUCTION
• Developments in Operating System
• OPERATING SYSTEM STRUCTURES
• PROCESSES
• CPU SCHEDULING
• FILE SYSTEM INTERFACE
• FILE SYSTEM IMPLEMENTATION
• FREE SPACE MANAGEMENT
• STORAGE MANAGEMENT
• SECURITY
• DEADLOCKS
Definition:
An Operating System is a computer program that manages the resources of a computer. It accepts keyboard or mouse input from users, displays the results of those actions, and allows the user to run applications or communicate with other computers via network connections.

• Software that controls the operation of a computer, directs
the input and output of data, keeps track of files, and
controls the processing of computer programs.
• Its roles include managing the functioning of the computer
hardware, running the applications programs, serving as an
interface between the computer and the user, and allocating
computer resources to various functions.
• When several jobs reside in the computer simultaneously
and share resources (multitasking), the OS allocates fixed
amounts of CPU time and memory in turn or allows one job
to read data while another writes to a printer and still
another performs computations.
• Through a process called time-sharing, a large computer can
handle interaction with hundreds of users simultaneously,
giving each the perception of being the sole user.
Types of OS
Real-time
• A real-time operating system is a multitasking
operating system that aims at executing real-time
applications.
• Real-time operating systems often use specialized
scheduling algorithms so that they can achieve a
deterministic nature of behavior.
• The main objective of real-time operating systems
is their quick and predictable response to events.
Multi-user vs. Single-user
• A multi-user operating system allows multiple
users to access a computer system
concurrently.
• Time-sharing systems can be classified as multi-user systems, as they enable multiple users to access a computer through the sharing of time.
• Single-user operating systems, as opposed to a
multi-user operating system, are usable by a
single user at a time.
Multi-tasking vs. Single-tasking
• When only a single program is allowed to run
at a time, the system is grouped under a single-
tasking system.
• However, when the operating system allows
the execution of multiple tasks at one time, it is
classified as a multi-tasking operating system.
• Multi-tasking can be of two types: pre-emptive
or co-operative. In pre-emptive multitasking,
the operating system slices the CPU time and
dedicates one slot to each of the programs.
Distributed
• A distributed operating system manages a
group of independent computers and makes
them appear to be a single computer.
• The development of networked computers
that could be linked and communicate with
each other gave rise to distributed computing.
• Distributed computations are carried out on
more than one machine. When computers in a
group work in cooperation, they make a
distributed system.
Embedded
• Embedded operating systems are designed to
be used in embedded computer systems. They
are designed to operate on small machines
like PDAs with less autonomy.
• They are able to operate with a limited
number of resources. They are very compact
and extremely efficient by design.
• Windows CE and Minix 3 are some examples
of embedded operating systems.

Examples of Operating Systems
DOS
• DOS (Disk Operating System) was the first widely-
installed operating system for personal computers. It is a
master control program that is automatically run when
you start your PC.
• DOS stays in the computer all the time letting you run a
program and manage files. It is a single-user operating
system from Microsoft for the PC.
• It was the first OS for the PC and is the underlying control
program for Windows 3.1, 95, 98 and ME. Windows NT,
2000 and XP emulate DOS in order to support existing
DOS applications.
• To use DOS, you must know where your programs and data are stored and how to use the DOS commands.
UNIX
• UNIX operating systems are used in widely-sold
workstation products from Sun Microsystems, Silicon
Graphics, IBM, and a number of other companies.
• The UNIX environment and the client/server program
model were important elements in the development
of the Internet and the reshaping of computing as
centered in networks rather than in individual
computers.
• Linux, a UNIX derivative available in both "free
software" and commercial versions, is increasing in
popularity as an alternative to proprietary operating
systems.
WINDOWS
• Windows is a personal computer operating
system from Microsoft that, together with
some commonly used business applications
such as Microsoft Word and Excel, has become
a de facto "standard" for individual users in
most corporations as well as in most homes.
• Windows contains built-in networking, which
allows users to share files and applications with
each other if their PCs are connected to a
network.
MACINTOSH
• The Macintosh (often called "the Mac"),
introduced in 1984 by Apple Computer, was the
first widely-sold personal computer with a
graphical user interface (GUI).
• The Mac was designed to provide users with a
natural, intuitively understandable, and, in
general, "user-friendly" computer interface.
• This includes the mouse, the use of icons or small
visual images to represent objects or actions, the
point-and-click and click-and-drag actions, and a
number of window operation ideas.
We can view an OS as a resource allocator.

• A computer system has many resources (hardware and software) that may be required to solve a problem: CPU time, memory space, file storage space, I/O devices and so on. The O.S acts as a manager of these resources and allocates them to specific programs and users as necessary for their tasks.
• One can view an O.S as a program running at all times on the computer.
Aims and Objectives
• Some issues of resource allocation
• Allocation Mechanisms
• Allocation Policies

General Issues
• In a multi-processing environment, usually processes
request more resources than there are available.
• It would be too expensive to provide enough non-
shareable resources to cope with maximum demand.
• OS must ensure that processes get the resources they
request without compromising system integrity.
• E.g., non-shareable resources cannot be shared amongst processes, because otherwise input or output could be meaningless (e.g. a printer intermingling output from several processes).

Allocating Resources
Issues:
• Mutual Exclusion of processes from non-
shareable resources.
• Deadlock should be handled sensibly.
• Ensure high level of resource utilization.
• Processes should be allocated resources within
a reasonable length of time.
• The last two points conflict with each other: the OS can ensure a high level of resource utilization, but sometimes at the cost of response times for process completion.
Allocation Mechanisms
• A resource is a component of the computer system for
which processes can compete. Typically these are:
• Central Processors
– Usually 1, but in a multi-processor environment there will
be more than one.
– Processors may be several of the same type, or some may
have specific characteristics, to be dedicated to certain
tasks.
– OS needs to know what each processor can do to help it
allocate processor to a process - uses Processor Descriptor.
– Each processor may have its own processor queue, to
queue jobs.

• Memory
– Processes and data need to be in primary memory
to be operated on.
– Memory is finite, and Memory Manager must
create enough space in primary memory to
accommodate a process for it to become
runnable.
• Peripherals
– Each device has a device descriptor
– IORBs are added to device request queue.

• Backing Store
– Can be used for Virtual Memory - managed by memory manager
– Can also be used for File Store - managed by filing system.
– Requests for space are granted unless there is no space left or the user has exceeded a quota.
– Requests cannot be queued.
• Files
– Collections of related information (programs, data, etc.)
– Access may be restricted - security
– Access may be limited - non-shareable in write mode
– Access may be unlimited - shareable in read mode
– Requests may be queued until the expiration of some timeout.
• Allocation mechanisms are handled by the OS at the appropriate layer, but policies for using the mechanisms should be standard throughout the OS.
Allocation Policies
Deadlock
• When Process A holds non-shareable resource X
and requests non-shareable resource Y, and
Process B holds resource Y and requests resource
X.
• 4 Necessary and Sufficient conditions for deadlock
– Mutual Exclusion - non-shareable resources
– Hold and Wait
– No Pre-emption
– Circular Waiting

• Deadlock can be solved by
– preventing any one of the above conditions from
holding
– Detection and recovery
– Avoidance by anticipation

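The circular-wait condition above can be checked mechanically. The sketch below (a hypothetical wait-for graph in Python, not tied to any particular OS's internal data structures) detects deadlock by looking for a cycle in the graph of which process waits on which:

```python
# Sketch of deadlock detection on a wait-for graph (process names
# here are hypothetical, chosen to match the text's example).
def has_cycle(wait_for):
    """wait_for maps a process to the processes it is waiting on.
    Returns True if the graph contains a cycle, i.e. deadlock."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / in progress / done
    colour = {p: WHITE for p in wait_for}

    def visit(p):
        colour[p] = GREY
        for q in wait_for.get(p, ()):
            if colour.get(q, WHITE) == GREY:
                return True               # back edge: circular wait found
            if colour.get(q, WHITE) == WHITE and visit(q):
                return True
        colour[p] = BLACK
        return False

    return any(colour[p] == WHITE and visit(p) for p in wait_for)

# Process A holds X and waits for B (which holds Y); B waits for A.
deadlocked = has_cycle({"A": ["B"], "B": ["A"]})   # True: deadlock
ok = has_cycle({"A": ["B"], "B": []})              # False: B can finish
```

This is the essence of the "detection" strategy: the OS periodically builds such a graph from its resource tables and runs a cycle search.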
In a resource-allocation state graph, if at least one link in a cycle cannot be removed, then deadlock exists.
Recovering from Deadlock

Four main methods of deadlock recovery:
• Abort all deadlocked processes
• Restart all deadlocked processes from a checkpoint
• Abort processes one at a time
• Pre-empt resources from processes

Deadlock recovery is usually performed by the operator.
Deadlock Avoidance
• Try to anticipate deadlock
• Will deny resource requests if algorithm decides
that granting the request could lead to
deadlock.
• Because of the non-determinacy of the OS, the algorithm is not always exact, and may deny requests that would not have led to deadlock.
• Deadlock does not occur immediately after
allocating a resource, so cannot simply make
projection on state graph and then run deadlock
detection algorithm
Dijkstra's Banker's Algorithm
• Keep a track of all processes' current usage,
future needs, and current availability of
resources.
• Deadlock will be avoided if request plus
current usage is less than or equal to process's
claim, and if after the request is granted there
is a sequence in which all processes can run to
completion even if they request their full claim
- safe state

Example, using 12 tape decks

             Current Loan   Maximum Need
Process(1)        1              4
Process(2)        4              6
Process(3)        5              8

Available: 2


Case A
Process(2) requests and obtains the 2 remaining decks. Process(2) can run to completion, releasing all 6 decks and providing sufficient resources for the other processes to complete - safe state.

Case B
Process(1) requests one deck instead. The resulting state is unsafe, because there are not enough available decks to guarantee completion of all processes - request denied.
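The two cases can be reproduced with a minimal safety check. This is a sketch (not a full multi-resource Banker's implementation) of the safe-state test for a single resource type, assuming, per Case A, that 2 decks are available before either request:

```python
# Banker's algorithm safety check: a state is safe if there is some
# order in which every process can obtain its full claim and finish.
def is_safe(loans, maxima, available):
    loans, finished = dict(loans), set()
    while len(finished) < len(loans):
        for p in loans:
            if p not in finished and maxima[p] - loans[p] <= available:
                available += loans[p]      # p runs to completion,
                finished.add(p)            # releasing everything it held
                break
        else:
            return False                   # nobody can finish: unsafe
    return True

loans  = {"P1": 1, "P2": 4, "P3": 5}
maxima = {"P1": 4, "P2": 6, "P3": 8}

# Case A: P2 obtains the 2 remaining decks -> the state stays safe.
print(is_safe({**loans, "P2": 6}, maxima, 0))   # True
# Case B: P1 obtains one deck instead -> no process can finish.
print(is_safe({**loans, "P1": 2}, maxima, 1))   # False
```

In Case B, every process still needs at least 2 more decks but only 1 remains, so no completion sequence exists and the request must be denied.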
Advantages of the Banker's Algorithm
• Allows the mutual-exclusion, hold-and-wait and no-pre-emption conditions to hold, avoiding deadlock by denying only unsafe requests
• System guarantees that processes will be allocated resources within finite time.

Virtual Operating System or Virtual Machine

• A virtual machine (VM) is a software implementation of a machine (i.e. a computer) that executes programs like a physical machine.
• Virtual machines are separated into two major categories, based on their use and degree of correspondence to any real machine.
• A system virtual machine provides a complete system platform which supports the execution of a complete operating system (OS).
• In contrast, a process virtual machine is designed to run a single program, which means that it supports a single process.
System virtual machines
• multiple OS environments can co-exist on the
same computer, in strong isolation from each
other
• the virtual machine can provide an instruction
set architecture (ISA) that is somewhat
different from that of the real machine
• application provisioning, maintenance, high
availability and disaster recovery

The main disadvantages of VMs are:
• a virtual machine is less efficient than a real
machine when it accesses the hardware
indirectly
• when multiple VMs are concurrently running
on the same physical host, each VM may
exhibit a varying and unstable performance
(Speed of Execution, and not results), which
highly depends on the workload imposed on
the system by other VMs, unless proper
techniques are used for temporal isolation
among virtual machines.
Kernel (computing)
• In computing, the kernel is the main component of most
computer operating systems; it is a bridge between
applications and the actual data processing done at the
hardware level. The kernel's responsibilities include
managing the system's resources (the communication
between hardware and software components).
• Usually as a basic component of an operating system, a
kernel can provide the lowest-level abstraction layer for the
resources (especially processors and I/O devices) that
application software must control to perform its function.
• It typically makes these facilities available to application
processes through inter-process communication
mechanisms and system calls (requests).
A kernel connects the application software to the hardware of a computer

Kernel basic facilities
• The kernel's primary function is to manage the
computer's resources and allow other programs
to run and use these resources. Typically, the
resources consist of:
• The Central Processing Unit. This is the most
central part of a computer system, responsible for
running or executing programs on it. The kernel
takes responsibility for deciding at any time which
of the many running programs should be
allocated to the processor or processors (each of
which can usually run only one program at a time)
• The computer's memory. Memory is used to store both
program instructions and data.
Typically, both need to be present in memory in order for a
program to execute.
Often multiple programs will want access to memory,
frequently demanding more memory than the computer has
available.
• Any Input/output (I/O) devices present in the computer, such
as keyboard, mouse, disk drives, printers, displays, etc.
The kernel allocates requests from applications to perform I/O
to an appropriate device (or subsection of a device, in the case
of files on a disk or windows on a display) and provides
convenient methods for using the device (typically abstracted
to the point where the application does not need to know
implementation details of the device).
Kernels also usually provide methods for synchronization and for communication between processes (called inter-process communication, or IPC).

In computer science, synchronization refers to one of two distinct but related concepts: synchronization of processes, and synchronization of data.

Process synchronization refers to the idea that multiple processes are to join up or handshake at a certain point, so as to reach an agreement or commit to a certain sequence of action.

Data synchronization refers to the idea of keeping multiple copies of a dataset in coherence with one another, or to maintain data integrity. Process synchronization primitives are commonly used to implement data synchronization.
A kernel may implement these features itself,
or rely on some of the processes it runs to
provide the facilities to other processes,
although in this case it must provide some
means of IPC to allow processes to access the
facilities provided by each other.
Finally, a kernel must provide running
programs with a method to make requests to
access these facilities.

Thread (computing)

A thread of execution is the smallest unit of processing that can be scheduled by an operating system. The implementation of threads and processes differs from one operating system to another, but in most cases, a thread is contained inside a process.

Multiple threads can exist within the same process and share resources such as memory, while different processes do not share these resources. In particular, the threads of a process share the latter's instructions (its code) and its context (the values that its variables reference at any given moment).
On a single processor, multithreading generally occurs by time-division multiplexing (as in multitasking): the processor switches between different threads.

This context switching generally happens frequently enough that the user perceives the threads or tasks as running at the same time. On a multiprocessor (including multi-core) system, the threads or tasks will actually run at the same time, with each processor or core running a particular thread or task.

Programs can also have user-space threads, using timers, signals, or other methods to interrupt their own execution, performing a sort of ad-hoc time-slicing.
How threads differ from processes
Threads differ from traditional multitasking operating
system processes in that:
• processes are typically independent, while threads exist
as subsets of a process
• processes carry considerably more state information
than threads, whereas multiple threads within a process
share process state as well as memory and other
resources
• processes have separate address spaces, whereas
threads share their address space
• processes interact only through system-provided inter-
process communication mechanisms
• Context switching between threads in the same process is typically faster than context switching between processes.
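The shared-address-space half of this comparison can be illustrated with Python's threading module: every thread in the process writes into the same list object, something separate processes could not do without explicit IPC. A minimal sketch:

```python
import threading

# Threads in one process share the same address space, so a mutation
# made by any thread is visible to all the others and to the main thread.
shared = []

def worker(tag):
    shared.append(tag)    # writes into memory shared with the main thread

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))     # [0, 1, 2, 3]: every thread's write landed
```

By contrast, a child created as a separate process would get its own address space, and its appends would never show up in the parent's list.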
Multithreading
Multithreading as a widespread programming and execution model allows multiple threads to exist within the context of a single process.

These threads share the process' resources but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. However, perhaps the most interesting application of the technology is when it is applied to a single process to enable parallel execution on a multiprocessor system.
• This property allows a multithreaded program to operate faster on computer systems that have multiple CPUs, CPUs with multiple cores, or across a cluster of machines, because the threads of the program naturally lend themselves to truly concurrent execution. In such a case, the programmer needs to be careful to avoid race conditions and other non-intuitive behaviours. In order for data to be correctly manipulated, threads will often need to rendezvous (meet) in time in order to process the data in the correct order. Threads may also require mutually-exclusive operations in order to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.
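The mutually-exclusive operations mentioned above can be sketched with a lock protecting a shared counter: without mutual exclusion, the read-modify-write steps of several threads could interleave and lose updates. Python's threading.Lock here stands in for any thread library's mutex:

```python
import threading

# Several threads increment one shared counter. The lock serialises the
# read-modify-write critical section so the final count is deterministic.
counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:            # mutually exclusive critical section
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                # 40000: no updates were lost
```

Holding such locks in inconsistent orders across threads is exactly how the deadlocks discussed earlier arise.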


• Another use of multithreading, applicable
even for single-CPU systems, is the ability for
an application to remain responsive to input.
In a single-threaded program, if the main
execution thread blocks on a long-running
task, the entire application can appear to
freeze. By moving such long-running tasks to a
worker thread that runs concurrently with the
main execution thread, it is possible for the
application to remain responsive to user input
while executing tasks in the background
Operating systems schedule threads in one of two ways:
• Preemptive multithreading is generally considered the
superior approach, as it allows the operating system to
determine when a context switch should occur. The
disadvantage to preemptive multithreading is that the
system may make a context switch at an inappropriate
time, causing lock convoy, priority inversion or other
negative effects which may be avoided by cooperative
multithreading.
• Cooperative multithreading, on the other hand, relies
on the threads themselves to relinquish control once
they are at a stopping point. This can create problems
if a thread is waiting for a resource to become
available.
Inter process communication (IPC)
A capability supported by some operating systems that
allows one process to communicate with another
process. The processes can be running on the same
computer or on different computers connected through
a network.

IPC enables one application to control another application, and several applications to share the same data without interfering with one another. IPC is required in all multiprocessing systems, but it is not generally supported by single-process operating systems such as DOS. OS/2 and MS-Windows support an IPC mechanism called DDE.
The Windows operating system provides mechanisms
for facilitating communications and data sharing
between applications. Collectively, the activities
enabled by these mechanisms are called interprocess
communications (IPC).

Typically, applications using IPC are categorized as clients or servers. A client is an application or a process that requests a service from some other application or process. A server is an application or a process that responds to a client request. Many applications act as both a client and a server, depending on the situation.
The following IPC mechanisms are supported by
Windows:
• Clipboard
• Data Copy
• DDE
• File Mapping
• Mailslots
• Pipes
• RPC

Using the Clipboard for IPC
• The clipboard acts as a central depository for data
sharing among applications. When a user performs a
cut or copy operation in an application, the
application puts the selected data on the clipboard in
one or more standard or application-defined formats.
Any other application can then retrieve the data from
the clipboard, choosing from the available formats
that it understands. The clipboard is a very loosely
coupled exchange medium, where applications need
only agree on the data format. The applications can
reside on the same computer or on different
computers on a network.
Using OLE for IPC
• Applications that use OLE manage compound
documents—that is, documents made up of
data from a variety of different applications.
OLE provides services that make it easy for
applications to call on other applications for
data editing. For example, a word processor
that uses OLE could embed a graph from a
spreadsheet. The user could start the
spreadsheet automatically from within the
word processor by choosing the embedded
chart for editing.
Using Data Copy for IPC
• Data copy enables an application to send
information to another application using the
WM_COPYDATA message. This method
requires cooperation between the sending
application and the receiving application. The
receiving application must know the format of
the information and be able to identify the
sender. The sending application cannot modify
the memory referenced by any pointers.

Using DDE for IPC
• DDE is a protocol that enables applications to
exchange data in a variety of formats. Applications
can use DDE for one-time data exchanges or for
ongoing exchanges in which the applications update
one another as new data becomes available.
• The data formats used by DDE are the same as
those used by the clipboard. DDE can be thought of
as an extension of the clipboard mechanism. The
clipboard is almost always used for a one-time
response to a user command, such as choosing the
Paste command from a menu.

Using a File Mapping for IPC
• File mapping enables a process to treat the
contents of a file as if they were a block of
memory in the process's address space. The
process can use simple pointer operations to
examine and modify the contents of the file.
When two or more processes access the same
file mapping, each process receives a pointer
to memory in its own address space that it can
use to read or modify the contents of the file.

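A minimal sketch of the idea in Python, using the mmap module on a throw-away temporary file (the Win32 API proper uses CreateFileMapping/MapViewOfFile, which is not shown here):

```python
import mmap
import os
import tempfile

# File mapping: treat the contents of a file as a block of memory.
# The two open/map steps below stand in for two cooperating processes.
path = os.path.join(tempfile.mkdtemp(), "shared.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 32)               # reserve 32 bytes in the file

with open(path, "r+b") as f:
    view = mmap.mmap(f.fileno(), 32)    # map the file into our address space
    view[0:5] = b"hello"                # plain memory-style writes
    view.flush()                        # push changes back to the file
    view.close()

with open(path, "rb") as f:
    print(f.read(5))                    # b'hello': the write reached the file
```

A second process mapping the same file would see the bytes at the same offsets, which is what makes file mapping usable as shared memory.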
Using a Mailslot for IPC
• Mailslots provide one-way communication. Any
process that creates a mailslot is a mailslot
server. Other processes, called mailslot clients,
send messages to the mailslot server by writing
a message to its mailslot. Incoming messages
are always appended to the mailslot. The
mailslot saves the messages until the mailslot
server has read them. A process can be both a
mailslot server and a mailslot client, so two-way
communication is possible using multiple
mailslots.
• A mailslot client can send a message to a
mailslot on its local computer, to a mailslot on
another computer, or to all mailslots with the
same name on all computers in a specified
network domain.
• Messages broadcast to all mailslots in a domain cannot be longer than 400 bytes, whereas messages sent to a single mailslot are limited only by the maximum message size specified by the mailslot server when it created the mailslot.
Using Pipes for IPC
• There are two types of pipes for two-way communication:
anonymous pipes and named pipes. Anonymous pipes
enable related processes to transfer information to each
other. Typically, an anonymous pipe is used for redirecting
the standard input or output of a child process so that it
can exchange data with its parent process.
• To exchange data in both directions (duplex operation), you
must create two anonymous pipes. The parent process
writes data to one pipe using its write handle, while the
child process reads the data from that pipe using its read
handle. Similarly, the child process writes data to the other
pipe and the parent process reads from it. Anonymous
pipes cannot be used over a network, nor can they be used
between unrelated processes.
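A one-way anonymous pipe can be sketched with os.pipe in Python; a single process plays both ends here, where a real parent would pass the read or write handle on to its child:

```python
import os

# An anonymous pipe is a one-way byte channel between two handles.
# For duplex operation, as the text notes, you would create two of these.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"ping")     # the "parent" writes via its write handle
os.close(write_fd)              # closing lets the reader see end-of-stream

data = os.read(read_fd, 1024)   # the "child" reads via its read handle
os.close(read_fd)
print(data)                     # b'ping'
```

Redirecting a child's standard input or output, as described above, amounts to handing the child one of these descriptors in place of its usual stdin or stdout.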
• Named pipes are used to transfer data between unrelated processes and between processes on different computers. Typically, a named-pipe server process creates a named pipe with a well-known name or a name that is to be communicated to its clients.
• A named-pipe client process that knows the name of the pipe can open its other end, subject to access restrictions specified by the named-pipe server process.
• After both the server and client have connected to the pipe, they can exchange data by performing read and write operations on the pipe.
Using RPC for IPC
• RPC enables applications to call functions remotely. Therefore, RPC makes IPC as easy as calling a function. RPC operates between processes on different computers on a network.
• The RPC provided by Windows is compliant with the Open Software Foundation (OSF) Distributed Computing Environment (DCE) specification. This means that applications that use RPC are able to communicate with applications running on other operating systems that support DCE. RPC automatically supports data conversion to account for different hardware architectures and for byte-ordering between dissimilar environments.
Developments in Operating System

• Early computers were (physically) large machines, and they were operated by programmers. First the program would be loaded into memory from the front-panel switches. The appropriate buttons would be pushed to set the starting address and to start the execution of the program. As the program ran, the programmer/operator could monitor its execution by the display of lights. If errors were discovered, the programmer could halt the program, examine the contents of memory and registers, and debug the program directly.

• As time went on, additional software and hardware were developed.
Developments in the O.S include the following:
• Single Batch Systems
• Off-line Processing
• Spooling
• Multi-programmed Batched Systems
• Time Sharing
• Distributed System

Single Batch Systems

While tapes were being mounted, or the programmer was operating the console, the CPU sat idle. Computer time was very expensive, and owners wanted computers to be used as much as possible. They needed high utilization to get as much as they could from their investments.
Off-line Processing

As a way to save time, methods like off-line processing were developed. Here, devices like printers could be operated off-line rather than by the main computer. The main advantage of off-line operation was that the main computer was no longer constrained by the speed of the printers.
Spooling
• Off-line processing was later replaced by spooling. Spooling uses the disk as a very large buffer, for reading as far ahead as possible on the input devices and for storing output files until the output devices are able to accept them.
• Spooling overlaps the I/O of one job with the computation of other jobs: even in a simple system, the spooler may be reading the input of one job while printing the output of a different job.
• Spooling has a direct beneficial effect on the performance of the system. Spooling can keep both the CPU and the I/O devices working at much higher rates.
Multi-programmed Batched Systems
• Spooling provides an important data structure
called job pool. Spooling will result in several
jobs that have already been read waiting on
disk, ready to run. A pool of jobs on disk
allows the operating system to select which
job to run next.
• If several jobs are ready to run at the same
time, the system must choose among them.
This decision is called CPU scheduling.

Time Sharing
• Multi-programmed batched systems provide an environment where various system resources (for example CPU, memory, peripheral devices) are utilized effectively.
• Time sharing (or multitasking) is a logical extension of multi-programming. Multiple jobs are executed by the CPU switching between them.
• Time-sharing systems were developed to provide interactive use of a computer system at a reasonable cost. A time-shared O.S uses CPU scheduling to provide each user with a small portion of a time-shared computer.
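The time-slicing idea behind time sharing can be sketched as a toy round-robin scheduler (the job names and durations are hypothetical, and real schedulers also handle I/O, priorities and arrival times):

```python
from collections import deque

# Round-robin time slicing: each job runs for a fixed quantum of "CPU
# time" in turn, so every user gets a regular share of the processor.
def round_robin(jobs, quantum):
    """jobs: {name: remaining_time}. Returns names in completion order."""
    queue, done = deque(jobs.items()), []
    while queue:
        name, remaining = queue.popleft()
        remaining -= quantum                 # job runs for one time slice
        if remaining > 0:
            queue.append((name, remaining))  # pre-empted: back of the queue
        else:
            done.append(name)                # job has finished its work
    return done

print(round_robin({"A": 3, "B": 5, "C": 2}, quantum=2))  # ['C', 'A', 'B']
```

With a quantum of 2, job C finishes in its first slice, A in its second, and B last, which is the fairness-over-throughput trade-off time sharing makes.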
Distributed System
• The trend in computer system is to distribute
computation among several processors. The
processors do not share memory or clock instead
each processor has its own local memory.
• The processors communicate with one another
through various communication lines, such as
high-speed buses or telephone lines.
• The processors in a distributed system may vary in
size and function. These processors are referred to
by a number of different names such as sites,
nodes, computers, etc.
Reasons for building distributed systems
• Resource Sharing
• Computation Speed Up
• Reliability
• Communication



Operating System Services
1. Program Execution
2. I/O Operations
3. File System Manipulation
4. Communications
5. Error Detection
6. Process management
7. Main-Memory Management
8. User Interface
9. Job Management
10. Task Management
11. Data Management
12. Device Management
13. Security



Program Execution
The system must be able to load a program into
memory and to run it. The program must be able to
end its execution either normally or abnormally
(indicating error).
I/O Operations
• A running program may require I/O.
• This I/O may involve a file or an I/O device. For
efficiency and protection, users cannot control
I/O devices directly. Therefore the O.S must
provide some means to do I/O.
File System Manipulation
• The file system is of particular interest. It should be
obvious that programs need to read and write files.
They also need to create and delete files by name.
Communications
• There are many circumstances in which one process
needs to exchange information with another process.
• There are two major ways in which such
communication can occur. The first takes place between
processes executing on the same computer. The second
takes place between processes executing on different
computer systems that are tied together by a computer
network.
Error Detection
• The O.S constantly needs to be aware of the
possible errors. Errors may occur in the CPU
and memory hardware (such as memory error
and power failure), in I/O devices (such as lack
of paper in the printer. )
• For each type of error, the O.S should take the
appropriate action to ensure correct and
consistent computing.



Process management
The O.S is responsible for the following activities
in connection with process management.
– The creation and deletion of both user and system
processes.
– The suspension and resumption of processes.
– The provision of mechanisms for processes
synchronization.
– The provision of mechanisms for process
communication.
– The provision of mechanism of deadlock handling.



Main-Memory Management
Main memory is the only storage device that the CPU is
able to address directly. For example for the CPU to
process data from disk, the data must first be transferred
to main memory by the CPU generating I/O requests.

The operating system is responsible for the following


activities in connection with memory management;
1. Keep track of which parts of memory are currently being
used and by whom.
2. Decide which processes are to be loaded into memory
when memory space becomes available.
3. Allocate and de-allocate memory space as needed.



User Interface
Nearly all graphics-based today, the user interface includes the
windows, menus and methods of interaction between you and
the computer. Prior to graphical user interfaces (GUIs), all
operations of the computer were performed by typing in
commands.

Not at all extinct, command-line interfaces are alive and well


and provide an alternate way of running programs on all major
operating systems.

Operating systems may support optional interfaces, both


graphical and command line. Although the overwhelming
majority of people work with the default interfaces, different
"shells" offer variations of appearance and functionality.



Task Management
• Multitasking, which is the ability to simultaneously
execute multiple programs, is available in all
operating systems today.
• Critical in the mainframe and server environment,
applications can be prioritized to run faster or
slower depending on their purpose.
• In the desktop world, multitasking is necessary for
keeping several applications open at the same time
so you can bounce back and forth among them.



Data Management
• Data management keeps track of the data on
magnetic disks, magnetic tapes and optical storage
devices. The application program deals with data by
file name and a particular location within the file.
• The operating system's file system knows where that
data are physically stored (which sectors on disk) and
interaction between the application and operating
system is through the programming interface.
• Whenever an application needs to read or write data,
it makes a call to the operating system.



Device Management
• Device management controls peripheral
devices by sending them commands in their
own proprietary language.
• The software routine that knows how to deal
with each device is called a "driver," and the
OS requires drivers for the peripherals
attached to the computer.
• When a new peripheral is added, that device's
driver is installed into the operating system.



Security
• Operating systems provide password
protection to keep unauthorized users out of
the system.
• Some operating systems also maintain activity
logs and accounting of the user's time for
billing purposes.
• They also provide backup and recovery
routines for starting over in the event of a
system failure.



File Management
• File management is one of the most visible
components of an O.S. Computers can store
information on several different types of
physical media. Magnetic tape, magnetic disk
and optical disks are the most common media
for storing information.
• Each of these media has its own characteristics
and physical organization. Each media is
controlled by a device such as disk drive or
tape drive with its own unique characteristics.
The operating system is responsible for the
following activities in connection with file
management.
– The creation and deletion of files.
– The creation and deletion of directories
– The support of primitives for manipulating files
and directories.
– The mapping of files onto secondary storage.
– The backup of files on stable (non-volatile) storage
media.



Job Management
• Job management controls the order and time
in which programs are run and is more
sophisticated in the mainframe environment
where scheduling the daily work has always
been a routine.
• In a desktop environment, batch files can be
written to perform a sequence of operations
that can be scheduled to start at a given time
e.g. scheduled backing up of data.



PROCESSES
• Early computer systems allowed only one
program to be executed at a time. This program
had complete control of the system, and had
access to all of the system's resources. Current-
day computer systems allow multiple
programs to be loaded into memory and to be
executed concurrently.
• These needs resulted in the notion of a process,
which is a program in execution. A process is the
unit of work in a modern time-sharing system.



Process State
As a process executes, it changes state.
The state of a process is defined in part by the
current activity of that process. Each process may
be in one of the following states;
1. New; The process is being created.
2. Ready; The process is waiting to be assigned to a
processor.
3. Running; Instructions are being executed.
4. Waiting; The process is waiting for some event to
occur (such as an I/O) completion or reception of a
signal).
5. Terminated; The process has finished execution.
Diagram of process state
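The transitions in that diagram can be sketched as a small lookup table. This is a minimal illustrative sketch (the state names follow the list above; no particular OS is implied):

```python
# Hypothetical sketch of the five process states and their legal transitions.
NEW, READY, RUNNING, WAITING, TERMINATED = (
    "new", "ready", "running", "waiting", "terminated")

# Each state maps to the set of states it may move to next.
TRANSITIONS = {
    NEW: {READY},                           # admitted by the OS
    READY: {RUNNING},                       # dispatched by the scheduler
    RUNNING: {READY, WAITING, TERMINATED},  # interrupt, I/O wait, or exit
    WAITING: {READY},                       # I/O or event completion
    TERMINATED: set(),                      # no further transitions
}

def can_move(src, dst):
    """Return True if the transition src -> dst is legal."""
    return dst in TRANSITIONS[src]
```

Note that a waiting process cannot go straight back to running; it must first become ready and be dispatched again.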



A process or task is a portion of a program in some
stage of execution. A program can consist of
several tasks, each working on their own or as a
unit (perhaps periodically communicating with
each other).

Each process that runs in an operating system is


assigned a process control block that holds
information about the process, such as a unique
process ID (a number used to identify the
process), the saved state of the process, the
process priority and where it is located in memory.
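The fields named above can be sketched as a small record. The field names here are illustrative, not taken from any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Sketch of a process control block holding the fields named above.
    Real kernels store many more fields (open files, accounting, etc.)."""
    pid: int                  # unique process ID
    state: str = "new"        # saved state of the process
    priority: int = 0         # scheduling priority
    memory_base: int = 0      # where the process image sits in memory
    registers: dict = field(default_factory=dict)  # saved CPU context

# The OS builds a PCB when a program is started, then updates it.
pcb = PCB(pid=42, priority=5)
pcb.state = "ready"           # the process is admitted to the ready queue
```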
What about process states?
A process in a computer system may be in one
of a number of different possible states, such
as
ready - if it can run when the processor
becomes free
running - it currently has the processor
blocked - it cannot run when the processor
becomes free



When a running process is interrupted by the processor
after completing its allocated time, its state is saved in its
process control block, its process state changes to ready and
its priority adjusted.

When a running process accesses an input or output device,


or for some reason cannot continue, it is interrupted by the
processor, the process state and associated data is saved in
the associated process control block. The process state is
changed to blocked and the priority adjusted.

When the scheduler decides the next task to run, it changes


the process state of the selected process to running and
loads the saved data associated with that process back into
the processor.
Typically, an operating system will provide a
number of program function calls that can be
used to control processes. These are similar to
those shown below,
block
wakeup
suspend
sleep
change_priority



What is a process control block?

A process control block or PCB is a data


structure (a table) that holds information
about a process. Every process or program
that runs needs a PCB. When a user requests
to run a particular program, the operating
system constructs a process control block for
that program.



What is a background and foreground process?

Multi-tasking systems support foreground and


background processes (tasks). A foreground task
is one that the user interacts directly with using
the keyboard and screen. A background task is
one that runs in the background (it does not
have access to the screen or keyboard).
Background tasks are usually used for printing.
Windows NT Workstation and Windows 95/98
assign a higher priority to foreground tasks.



CPU SCHEDULING

CPU scheduling is the basis of multi-programmed
operating systems. By switching the CPU among
processes, the O.S can make the computer more
productive.



Basic Concepts
The objective of multi-programming is to have
some process running at all times to maximize
CPU utilization. For a single processor system,
there will never be more than one running
process; if there are more processes, the rest
will have to wait until the CPU is free and can
be rescheduled.



Preemptive Scheduling
CPU scheduling decisions may take place under the
following four circumstances;
1. When a process switches from the running state to
the waiting state (for example I/O request, or
invocation of wait for the termination of the child
processes.)
2. When a process switches from the running state to
the ready state (for example, when an interruption
occurs).
3. When a process switches from the waiting state to
the ready state (for example, completion of I/O).
4. When a process terminates.
• For circumstances 1 and 4 there is no choice in terms
of scheduling. A new process (if one exists in the
ready queue) must be selected for execution. There
is a choice however for circumstances 2 and 3. When
scheduling takes place only under circumstances 1
and 4, we say the scheduling scheme is non-
preemptive; otherwise, the scheduling scheme is
preemptive. Under non-preemptive scheduling, once
the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either by
termination or by switching to the waiting state.
• This scheduling method is used by the Microsoft
Windows environment.
Scheduling Criteria

Different CPU scheduling algorithms have different


properties and may favor one class of processes over
another.

Criteria for determining the best algorithms include the


following;

CPU utilization

We want to keep the CPU as busy as possible, in a real


system CPU utilization should range between 40% – 90%.
Throughput
If the CPU is busy, then work is being done. One measure of
work is the number of processes that can be completed per
time unit, called throughput. For long processes, this rate
may be one process per hour; for short transactions, the
throughput might be 10 processes per second.

Turnaround time
From the point of view of a particular process, the important
criterion is how long it takes to execute that process. The
interval from the time of submission to the time of
completion is the turnaround time. Turnaround time is the
sum of the periods spent waiting to get into memory, waiting
in the ready queue, executing on the CPU, and doing I/O.



Waiting Time
The CPU scheduling algorithm does not affect the amount of time
during a process executes or does I/O, it affects only the amount
of time that a process spends waiting in the ready queue.

Response time
In an interactive system turnaround time may not be the best
criterion. Often a process can produce some output fairly early,
and continue computing new results while previous results are
being output to the user. Thus, another measure is the time from
the submission of a request until the first response is produced.
This measure, called response time, is the amount of time it takes
to start responding, not the time that it takes to output that
response. The turnaround time is generally limited by the speed
of the output device. It is desirable to maximize CPU utilization
and throughput, and to minimize turnaround time, waiting time
and response time.
Scheduling Algorithms
CPU scheduling deals with the problem of deciding
which of the processes in the ready queue is to be
allocated the CPU, there are many different CPU
scheduling algorithms. Let us describe some of them.

1. First come first served (FCFS)


The simplest CPU scheduling algorithm is the first-
come, first serve scheduling (FCFS) algorithm.
With this scheme, the process that requests the CPU
first is allocated to the CPU first. The implementation
of the FCFS policy is easily managed with FIFO queue.
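Waiting times under FCFS depend entirely on arrival order. A minimal sketch, using illustrative CPU-burst lengths (24, 3 and 3 time units for three processes arriving in that order):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under first-come, first-served:
    each process waits for the total burst time of those before it."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # this process waits for all earlier bursts
        elapsed += b
    return waits

# Illustrative CPU bursts (time units) for P1, P2, P3 in arrival order.
bursts = [24, 3, 3]
waits = fcfs_waiting_times(bursts)    # [0, 24, 27]
avg_wait = sum(waits) / len(waits)    # 17.0 time units
```

Note how the long first burst makes the short jobs wait; arriving in the order 3, 3, 24 would give a much lower average wait.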
2. Shortest Job First Scheduling (SJF)
This algorithm associates with each process the
length of the process's next CPU burst. When the CPU is
available, it is assigned to the process that has the
smallest next CPU burst. If two processes have the
same length next CPU burst, FCFS scheduling is used
to break the tie.

3. Priority Scheduling
A priority is associated with each process and the CPU
is allocated to the process with highest priority. Equal
priority processes are scheduled in FCFS order.
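Both SJF and priority scheduling reduce to "sort the ready queue by a key, break ties FCFS". A sketch with illustrative burst lengths (the numbers are made up for the example):

```python
def sjf_order(bursts):
    """Order process indices by shortest next CPU burst;
    ties fall back to arrival (FCFS) order, as described above."""
    return sorted(range(len(bursts)), key=lambda i: (bursts[i], i))

def waiting_times(bursts, order):
    """Waiting time of each process when run in the given order."""
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed
        elapsed += bursts[i]
    return waits

bursts = [6, 8, 7, 3]                  # illustrative next-burst lengths
order = sjf_order(bursts)              # [3, 0, 2, 1]: shortest job first
waits = waiting_times(bursts, order)   # [3, 16, 9, 0]
avg = sum(waits) / len(waits)          # 7.0, lower than FCFS would give
```

Priority scheduling works the same way with the priority value as the sort key instead of the burst length.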



FILE SYSTEM INTERFACE
File Concept
Computers can store information on several different storage
media such as;
• Magnetic disks
• Magnetic tapes
• Optical disks

The operating system O.S abstracts from the physical


properties of its storage device to define a logical storage unit
which is the file.

These storage devices are usually non volatile so contents are


persistent through power failures and system reboots.
Definition:
A file is a named collection of related information
that is recorded in secondary storage.
In general a file is a sequence of bits, bytes, lines,
or records whose meaning is defined by the file’s
creator and user.
A file has a certain defined structure according to
its type.
 A text file is a sequence of characters
organized into lines.
 A source / program file is a sequence of
subroutines and functions.
File Attributes:

1. Name; The symbolic file name is the only


information kept in human readable form.
2. Type; This information is needed for those
systems that support different types.
3. Location; This information is a pointer to a device
and to the location of the file on that device.
4. Size; The current size of file.
5. Protection; Access-control information i.e. who
can do reading, writing, executing.
6. Time, date and user identification
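Most of these attributes can be read back through the operating system's stat interface. A self-contained sketch (the file name and contents are invented for the example):

```python
import os, tempfile

# Create a throwaway file so the example is self-contained.
path = os.path.join(tempfile.mkdtemp(), "notes.txt")
with open(path, "w") as f:
    f.write("hello")

st = os.stat(path)             # ask the OS for the stored attributes
name = os.path.basename(path)  # 1. name, kept in human-readable form
size = st.st_size              # 4. current size of the file, in bytes
mode = st.st_mode              # 5. protection (access-control) bits
mtime = st.st_mtime            # 6. time of last modification
```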
File Operations

A file is an abstract data type.


To define a file properly, we need to consider the
operations that can be performed on files.
The O.S provides system calls to create, write, read,
reposition, delete and truncate files.

Creating a file
Two stages are necessary to create a file;
• Space in the file system must be found for the file.
• An entry for the new file must be made in the directory.
The directory entry records the name of the file and its
location in the file system.
Reading a file; to read from a file, we use a
system call that specifies the name of the file
and where (if necessary) the next block of the
file should be found. The system needs to
keep a read pointer to the location in the file
where the next read is to take place.

Repositioning within a file


The directory is searched for the appropriate entry,
and the current-file-position pointer is set to a given
value. This operation does not need to involve I/O.
Deleting a file; to delete a file, we search the
directory for the named file.
Having found the associated directory entry, we release all file
space and erase the directory entry.

Truncating a file
When a user wants the attributes of a file to
remain the same, but wants to erase contents of
the file, rather than forcing the user to delete the
file and then create it, this function allows
attributes to remain unchanged (except for the file
length) but for the file to be reset to length zero.
A common technique for implementing file types is to
include the type as part of the file name.
The name is split into two parts;
• A name and an extension
usually separated by a period.
In this way the user of a file and the O.S can tell from the
name alone what the type of a file is, e.g. in MS-DOS the
name consists of up to 8 characters followed by a period and
terminated by an up-to-three-character extension.
The system uses the extension to indicate the type of the
file and the type of operations that can be done on the
file. Only a file with a .com, .exe or .bat extension can be
executed.
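Splitting a file name at its last period and classifying by extension can be sketched as follows; the extension table is illustrative, not a complete DOS type list:

```python
import os.path

def file_type(filename):
    """Classify a file by its extension, as a DOS-like system would."""
    _, ext = os.path.splitext(filename)   # split at the last period
    ext = ext.lower()
    if ext in (".com", ".exe", ".bat"):
        return "executable"               # the only types DOS will run
    if ext in (".txt", ".doc"):
        return "document"
    return "unknown"

kind = file_type("GAME.EXE")   # "executable"
```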
Access Methods
Files store information on storage devices. When it is used, this
information must be accessed and read into computer memory. There are
several ways that the information in the file can be accessed.

1. Sequential Access
This is the simplest access method: information in the file is
processed in order, one record after the other, e.g. editors and
compilers usually access files in this fashion.
The bulk of the operations on a file are reads and writes.

2. Direct Access (Random Access)


A file is made up of fixed length logical records that allows programs
to read and write records rapidly in no particular order. The direct
access method is based on a disk model of a file, since disks allow
random access to any file block.
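With fixed-length records, reading record n is a single seek to offset n × record-length, with no sequential scan. A sketch using an in-memory file (the record length and contents are invented for the example):

```python
import io

RECLEN = 16                        # fixed logical-record length (illustrative)

# An in-memory "disk file" of four fixed-length records.
data = b"".join(f"record-{i}".ljust(RECLEN).encode() for i in range(4))
f = io.BytesIO(data)

def read_record(f, n):
    """Direct access: jump straight to record n, no sequential scan."""
    f.seek(n * RECLEN)             # position = record number * record length
    return f.read(RECLEN).strip()  # strip the space padding

third = read_record(f, 2)          # b"record-2", read without touching 0 and 1
```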
3. Other Access Methods
Other access methods can be built on top of a
direct-access method. These additional methods
generally involve the construction of an index for a file.
The index, like a book index catalogue points to
various blocks. To find an entry in a file, one first
searches the index and then use the pointer to
access the file directly.



Directory Structure

Operations performed on a directory;


• Search for a file; we need to be able to search a directory
structure to find the entry for a particular file.

• Create a file; new files need to be created and added to the


directory.

• Delete a file; when a file is no longer needed we can remove it


from directory.

• List a Directory; we need to be able to list the files in a directory,


and the contents of the directory entry for each file in the list.



Directory Structure cont.
• Rename a file; because the name of a file represents its
contents to its users, the name must be changeable when
the contents or use of the file changes. Renaming a file may
also allow its position within the directory structure to be
changed.

• Traverse the file system; it is useful to be able to access


every directory, and every file within a directory structure.
For reliability, it is a good idea to save the contents and
structure of the entire file system at regular intervals. This
saving often consists of copying all files to magnetic tape.
This technique provides a backup copy in case of system
failure or if the file is no longer in use.
Schemes for defining the logical structure of a directory

1. Single – Level Directory


This is the simplest directory structure.
All files are contained in the same directory.
Directory Files

Limitations
1. Since all files are in the same directory they must
have unique names
2. As the files increase it becomes difficult to remember
the names of all the files.
2. Two Level Directory
The major disadvantage of a single-level directory is the
confusion of file names between different users. The
standard solution is to create a separate directory for each
user.
In two-level directory structure, each user has her own user
file directory.
The identification of a directory is indexed by user name or
account number.
When a user logs in, the system's master file directory is
searched.
When a user refers to a particular file, only his own (UFD) user
file directory is searched.
These different users may have files with same name as long
as they are in different directories.
The user directories themselves must be
created and deleted as necessary



3. Tree Structured Directories
• A two-level directory is a two level tree the
natural generalization is to extend the
directory structure to a tree of any height.
• The tree has a root.
• Every file in the system has a unique path; a
path name is the path from the root, through
all directories, to a specified file.

4. Acyclic-Graph Directories



• A tree structure prohibits the sharing of files
or directories. An acyclic graph allows directories
to be shared by sub-directories and files.
• Some files or sub-directories may be in two
different directories.
• An acyclic graph is a natural generalization of a
tree-structured directory.



5. General Graph Directory



Protection

• When information is kept in a computer system, a


major concern is its protection from physical damage
(reliability) and from improper access.

• Reliability is provided by duplicate copies of files many


computers have system program that automatically
copy disk files to tapes at regular intervals (once per
day, week or month) to maintain a copy should a file
system be accidentally destroyed.

• Protection can be provided in many ways. For small


single-user systems, e.g. remove a flash disk and lock
it in a drawer.
Several different operations may be
controlled;

 Read; read from file


 Write; write or rewrite the file.
 Execute; locate the file into memory and
execute it.
 Append; write new information at the end of the
file.
 Delete; delete the file and free its space for
possible reuse.
Access lists and groups

The most common approach to the protection problem is


to make access depending on the identity of the user.
Various users may need different types of access to a file
or directory.

Many systems recognize three classifications of users in


connection with the following;
 Owner; the user who created the file.
 Group; A set of users who are sharing the file and
need similar access is a group or work group.
 Universe; All other users in the system constitutes
the universe.
For example
 Sana is writing a book and employs John & Jim to help.
 Sana should be able to invoke all operations on the file.
 John & Jim should be able only to read and write the file, they
should not be allowed to delete.
 All other users should be able only to read the file and give
comments.

Protection refers to a mechanism for controlling the access of
programs, processes or users to the resources defined by a computer
system. Reasons for providing protection include preventing the
malicious, intentional violation of an access restriction by a user.

Protection can improve reliability by detecting latent errors at the


interfaces between component subsystems.

A protection-oriented system provides means to distinguish between


authorized and unauthorized usage.
Domain of Protection
A computer system is a collection of processes and
objects. By objects we mean both hardware
objects (such as the CPU, memory, printers and disks)
and software objects (such as files and programs).

Domain structure
Each domain defines a set of objects and the types
of operations that may be invoked on each object.
The ability to execute an operation on an object is an
access right. A domain is a collection of access rights,
each of which is an ordered pair <object-name, rights-set>.
Domains can share access rights.

We have three Domains, D1, D2 & D3. The access


right <Q4 {print}> is shared by both D3 and D2.
Implying that a process executing in either one of
these two domains can print object Q4, note that a
process must be executing in domain D1 to read and
write object Q1.
A domain can be realized in a variety of ways.
 Each user may be a domain; in this case the set of objects
that can be accessed depends on the identity of the user.
Domain switching occurs.
When the user is changed that is generally when one user
logs out and another user logs in.
 Each process may be a domain; in this case the set of
objects that can be accessed depending on the identity of
the process. Domain switching corresponds to one process
sending a message to another process and then waiting for
a response.
 Each procedure may be a domain; in this case the set of
objects that can be accessed corresponds to the local
variable defined within the procedure. Domain switching
occurs when a procedure call is made.
Access Matrix

Our model of protection can be viewed abstractly as a matrix,


called an access matrix.
• The row of the access matrix represents domains, and the
columns represent objects.
• Each entry in the matrix consists of a set of a access rights.
Because objects are defined explicitly by columns, we can omit
the object name from access rights. The entry access (i,j)
defines the set of operations that a process executing in
domain Di can invoke on object Oj.
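A sketch of the access matrix as a dictionary of dictionaries, reusing the earlier example where <Q4, {print}> is shared by D2 and D3 (the other entries are invented to fill out the matrix):

```python
# Illustrative access matrix: rows are domains, columns are objects.
matrix = {
    "D1": {"Q1": {"read", "write"}},      # only D1 may read/write Q1
    "D2": {"Q4": {"print"}},
    "D3": {"Q4": {"print"}},              # <Q4, {print}> shared by D2 and D3
}

def access(domain, obj):
    """access(i, j): the set of operations domain i may invoke on object j."""
    return matrix.get(domain, {}).get(obj, set())

can_print = "print" in access("D3", "Q4")   # True: the right is shared
can_read = "read" in access("D2", "Q1")     # False: only D1 holds that right
```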



FILE SYSTEM IMPLEMENTATION

File System Structure


Disks provide the bulk of secondary storage on
which a file system is maintained.
To improve I/O efficiency, I/O transfers
between memory and disks, operations are
performed in units of blocks. Each block is one
or more sectors.



Disks have two important characteristics that
make them a convenient media for storing
multiple files;
1. They can be rewritten in pieces, it is possible
to read a block from a disk, to modify the
block, and to write it back into the same place.
2. We can access directly any given block of
information on the disk. Thus it is simple to
access any file either sequentially or randomly
and switching from one file to another
requires only moving the read-write heads and
waiting for the disk to revolve.
File-System Mounting
Just as a file must be opened before it is used, a file
system must be mounted before it is available to
processes on the system,
e.g. /home, /user, /home/Jane.

Allocation Methods
The direct access nature of disks allows us flexibility
in the implementation of files.
Many files are stored on the same disk.
The main problem is how to allocate space to these
files so that disk space is effectively utilized.
The three major methods of allocating disks space
are;
1. Contiguous Allocations
The contiguous allocation method requires each
file to occupy a set of contiguous blocks on the
disk.
Contiguous allocation of a file is defined by the disk
address and the length (in block units) of the first block. If
the file is n blocks long and starts at location b,
then it occupies blocks b, b+1, b+2, …, b+n−1.



The directory entry for each file indicates the
address of the starting block and length of the
area allocated for this file.



Contiguous Allocation of disk space
For direct access to block i of a file that starts at block b, we can
immediately access block b+i. Thus both sequential and direct
access can be supported by contiguous allocation.
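The block-mapping arithmetic is a single addition, which is why direct access is so cheap here. A minimal sketch, reusing the n-blocks-starting-at-b description above:

```python
def physical_block(start, length, i):
    """Map logical block i of a contiguously allocated file that
    starts at disk block `start` and is `length` blocks long."""
    if not 0 <= i < length:
        raise IndexError("block outside the file")
    return start + i          # direct access: one addition, no search

# A file of n = 5 blocks starting at block b = 9 occupies blocks 9..13.
blocks = [physical_block(9, 5, i) for i in range(5)]   # [9, 10, 11, 12, 13]
```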

Problem with Contiguous Allocating


The major problem is determining how much space is needed for a
file. When a file is being created, the total amount of space it will
need must be found and allocated.
If we allocate too little space to a file, we may find that the file
cannot be extended. We then have two possibilities:
first, the user program can be terminated with an error message. The
user must then allocate more space and run the program again.
These repeated runs may be costly.
To prevent this, the user will over-estimate, resulting in wasted
space.
2. Linked Allocation

Linked allocation solves all problems of contiguous


allocation. Here each file is a linked list of disk blocks,
the disk blocks can be scattered any where on the
disk.
The directory contains a pointer to the first and last blocks of
the file. e.g. a file of five blocks might start at block 9, continue
to block 16, then block 1 then block 2 and finally block 25.

Each block contains a pointer to the next block. Thus if each


block is 512 bytes and a disk address (the pointer) requires 4
bytes then the user sees blocks of 508 bytes.

To create a new file, we simply create a new entry in the


directory. With linked allocation, each directory entry has a
pointer to the first disk block of the file.

No need to declare the size of a file at creation.

A file can continue to grow as long as there is a free block.
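Reaching block i means walking the chain from the start, one pointer at a time. A sketch using the five-block example chain above (9 → 16 → 1 → 2 → 25):

```python
# next_block[b] is the pointer stored in block b (None marks end of file).
next_block = {9: 16, 16: 1, 1: 2, 2: 25, 25: None}

def ith_block(start, i):
    """Reach logical block i by following the chain from the start block.
    Linked allocation offers no shortcut for direct access."""
    b = start
    for _ in range(i):        # i pointer hops
        b = next_block[b]
    return b

fourth = ith_block(9, 3)      # block 2: three pointer hops from block 9
```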
Disadvantages
• It can be used effectively only for sequential-access
files: to find the ith block of a file, we must start at the
beginning of that file and follow the pointers until
block i.
• Space is required for the pointers (4 bytes each), so
each file requires slightly more space than it otherwise
would.
• Reliability: if one block is lost, you can't trace the
pointer to the next block.
3. Indexed Allocation
Linked allocation solves the external-fragmentation
and size-declaration problems of contiguous
allocation. However, linked allocation cannot
support efficient direct access, since the pointers to
the blocks are scattered with the blocks themselves all
over the disk and need to be retrieved in order.

Indexed allocation solves this problem by bringing


together all the pointers into one location; the index
block.



Each file has its own index block, which is an
array of disk-block addresses.
The ith entry in the index block points to the ith
block of the file; to read the ith block, we simply use
the pointer in the ith index-block entry.
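With all the pointers gathered into one index block, direct access becomes a single table lookup. A sketch with invented block addresses:

```python
# Index block: an array of disk-block addresses (illustrative numbers).
index_block = [19, 7, 3, 12, 30]

def data_block(i):
    """The ith index-block entry points to the ith file block,
    so direct access is one lookup rather than a pointer chase."""
    return index_block[i]

third = data_block(2)   # disk block 3 holds logical block 2 of the file
```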



FREE SPACE MANAGEMENT

Since there is only a limited amount of disk space, it is


necessary to reuse the space from deleted files for new
files if possible.
To keep track of free disk space, the system maintains a
free-space list. The free-space list records all disk blocks
that are free, that is, those not allocated to any file or
directory.
To create a file, we search the free-space list for the
required amount of space and allocate that space to the
new file. This space is then removed from the free-space
list. When a file is deleted, its disk space is added to the
free-space list.
Methods for free-space management/allocation
1. Bit Vector
Frequently, the free-space list is implemented as a bit vector (bit map).
Each block is represented by one bit: if the block is free, the bit is 1;
if the block is allocated, the bit is 0. For example, on a 10-block disk
where blocks 2, 3, 4, 5, 8, and 9 are free and the rest are allocated, the
free-space bit map would be 0011110011.
The main advantage of this approach is that it is relatively simple
and efficient to find the first free block or n consecutive free
blocks.
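As a sketch, the bit-vector scheme for the example above can be modeled in Python (the 10-block disk and the bit layout, with blocks 2, 3, 4, 5, 8, and 9 free, are illustrative assumptions):

```python
# Sketch of a free-space bit vector for a tiny 10-block disk.
# Bit i == 1 means block i is free; bit i == 0 means it is allocated.
bitmap = [0, 0, 1, 1, 1, 1, 0, 0, 1, 1]

def first_free(bits):
    """Return the number of the first free block, or None if disk is full."""
    for i, bit in enumerate(bits):
        if bit == 1:
            return i
    return None

def allocate(bits):
    """Allocate the first free block: clear its bit and return its number."""
    i = first_free(bits)
    if i is not None:
        bits[i] = 0
    return i

print(allocate(bitmap))  # -> 2, and bitmap[2] becomes 0
```

Real systems scan the bitmap a word at a time with bit instructions, but the logic is the same linear scan for the first set bit.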

2. Linked List
Another approach is to link together all the free disk blocks
keeping a pointer to the first free block. This first block contains
a pointer to the next free disk block and so on.
(Figure: the free-space list head points to the first free block on disk.)

A modification of the free-list approach (sometimes called
grouping) is to store the addresses of n free blocks in the
first free block. The first n-1 of these blocks are actually
free; the last block contains the addresses of another n free
blocks, and so on. The advantage is that the addresses of a
large number of free blocks can be found quickly.
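A toy Python model of this grouping scheme (the block numbers, group size, and the -1 end marker are all hypothetical) illustrates how one chained address block yields many free-block addresses at once:

```python
# Sketch of the "grouping" free-list modification (illustrative).
# Each address block holds n addresses: the first n-1 name free blocks,
# and the last entry chains to the next address block (-1 ends the chain).
n = 4
groups = {
    7:  [12, 13, 14, 20],   # 12, 13, 14 free; block 20 holds more addresses
    20: [21, 22, 23, -1],   # -1 marks the end of the chain
}

def all_free_blocks(head):
    """Walk the chain of address blocks, collecting every free block number."""
    free, block = [], head
    while block != -1:
        addrs = groups[block]
        free.extend(addrs[:-1])   # first n-1 entries are free blocks
        free.append(block)        # the address block itself is also free
        block = addrs[-1]         # last entry chains to the next group
    return free

print(sorted(all_free_blocks(7)))  # -> [7, 12, 13, 14, 20, 21, 22, 23]
```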
3. Counting
This approach takes advantage of the fact that several
contiguous blocks may be freed simultaneously,
particularly when space is allocated with contiguous
allocation or clustering. Thus, rather than keeping a
list of n free disk addresses, we can keep the address
of the first free block and the count n of free
contiguous blocks that follow it.

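The counting scheme can be sketched in a few lines of Python; the (start, count) pairs below are hypothetical but reuse the free blocks from the bit-vector example:

```python
# Sketch of the "counting" free-space scheme: instead of listing every
# free block, keep (first-block, count) pairs for runs of contiguous
# free blocks.
free_extents = [(2, 4), (8, 2)]   # blocks 2-5 free, blocks 8-9 free

def expand(extents):
    """Expand (start, count) pairs back into individual free block numbers."""
    return [start + k for start, count in extents for k in range(count)]

print(expand(free_extents))  # -> [2, 3, 4, 5, 8, 9]
```

Two pairs here describe six free blocks; on a disk with long free runs the list stays far shorter than a block-by-block list.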


Directory Implementation

The selection of directory allocation and directory management


algorithms has a large effect on the efficiency, performance and
reliability of the file system.

Methods of Directory Implementation


1. Linear List
A linear list of directory entries requires a linear search to find
a particular entry.
To create a new file, we must first search the directory to be
sure that no existing file has the same name.
Then we add a new entry at the end of the directory. To
delete a file, we search the directory for the named file, then
release the space allocated to it.
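A minimal Python sketch of a linear-list directory (the file names and block numbers are hypothetical) shows the search-before-create and search-then-delete steps:

```python
# Sketch of a linear-list directory: a list of (name, first_block)
# entries searched sequentially (illustrative only).
directory = [("notes.txt", 9), ("data.bin", 4)]

def create(name, first_block):
    """Fail if the name already exists, otherwise append a new entry."""
    if any(entry[0] == name for entry in directory):
        raise FileExistsError(name)
    directory.append((name, first_block))

def delete(name):
    """Linear search for the named file, then remove its entry."""
    for i, entry in enumerate(directory):
        if entry[0] == name:
            del directory[i]
            return
    raise FileNotFoundError(name)

create("log.txt", 17)
delete("data.bin")
print([e[0] for e in directory])  # -> ['notes.txt', 'log.txt']
```

Every operation scans the list, so the cost grows linearly with the number of directory entries.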
2. Hash Table
In this method, a linear list stores the
directory entries, but a hash data structure is
also used. The hash table takes a value
computed from the file name and returns
a pointer to the file name in the linear list.
It can therefore greatly reduce the directory
search time.

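A small Python sketch, using a dictionary as the hash table over a linear list of entries (names and block numbers hypothetical), shows how the hash lookup replaces the linear search:

```python
# Sketch of a hash-assisted directory: entries still live in a linear
# list, but a hash table maps each file name to its position in that
# list, so a lookup avoids the linear search.
entries = []            # the linear list of (name, first_block) entries
index = {}              # hash table: name -> position in `entries`

def add_entry(name, first_block):
    index[name] = len(entries)
    entries.append((name, first_block))

def lookup(name):
    """O(1) expected time: hash the name to find its slot in the list."""
    pos = index.get(name)
    return entries[pos] if pos is not None else None

add_entry("notes.txt", 9)
add_entry("log.txt", 17)
print(lookup("log.txt"))  # -> ('log.txt', 17)
```

A real implementation must also handle hash collisions and keep the table consistent on deletion; this sketch omits both.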


Swapping
A process needs to be in memory to be executed. A process, however,
can be swapped temporarily out of memory to a backing store and
then brought back into memory for continued execution. Consider, for
example, a multiprogramming environment with a round-robin
CPU-scheduling algorithm: when a quantum expires, the memory manager
will start to swap out the process that has just finished and swap
another process into the memory space that has been freed.

(Figure: swapping of two processes using a disk as a backing store.)



Swapping cont.
A variant of this swapping policy is used for priority
based scheduling algorithms. If a higher priority
process arrives and wants service, the memory
manager can swap out the lower-priority process so
that it can load and execute the higher-priority
process.
When the higher-priority process finishes, the lower-
priority process can be swapped back in and continued,
normally in the same memory space that it occupied
previously. This restriction is dictated by the method
of address binding. Swapping requires a backing
store.
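The priority-based policy above can be illustrated with a toy simulation (the process names, priorities, and one-slot memory are hypothetical; real swapping moves whole memory images between RAM and disk):

```python
# A toy simulation of priority-based swapping (illustrative only):
# when a higher-priority process arrives, the lowest-priority resident
# process is swapped out to the backing store to free its memory.
memory = [("editor", 2)]          # resident processes as (name, priority)
backing_store = []                # swapped-out processes
CAPACITY = 1                      # memory holds one process in this toy model

def arrive(name, priority):
    """Swap out the lowest-priority resident if the newcomer outranks it."""
    if len(memory) >= CAPACITY:
        memory.sort(key=lambda p: p[1])          # lowest priority first
        victim = memory[0]
        if priority > victim[1]:
            memory.remove(victim)
            backing_store.append(victim)         # "swap out" to disk
        else:
            return False                         # newcomer must wait
    memory.append((name, priority))
    return True

arrive("compiler", 5)             # higher priority: 'editor' is swapped out
print(memory, backing_store)      # -> [('compiler', 5)] [('editor', 2)]
```

When the higher-priority process finishes, the swapped-out process would be brought back in, ideally to the same memory region, as the text notes.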
