
OPERATING SYSTEM

Concepts in a simplified way!

Compiled by
Dr Arokia Paul Rajan R
Associate Professor, CHRIST (Deemed to be University)

Dr Gobi Ramasamy
Associate Professor, CHRIST (Deemed to be University)
Operating System

PREFACE

Operating System: Concepts in a Simplified Way aims to make the complex principles of
operating systems accessible. This book breaks down core topics like process management,
memory organization, file systems, and security into clear, concise explanations, supported
by real-world examples and diagrams.

Each chapter builds sequentially, combining theory with practical insights to help readers
understand and apply key concepts. Designed for students, educators, and enthusiasts, this
book provides a straightforward guide to the essential workings of operating systems,
making a challenging subject approachable and engaging.

This book is intended to cover the syllabus of Operating Systems for the BCA students.

Special thanks to the 2nd-year BCA (Section A) students of the 2024-25 batch of Christ University.

Dr R Arokia Paul Rajan & Dr Gobi Ramasamy


Chapter 1: Basic Concepts of Operating System
Chapter 2: Process Management
Chapter 3: Deadlock & Memory Management
Chapter 4: Memory Management & Files
Chapter 5: Disk Management & Files Implementation

CHAPTER 1: BASIC CONCEPTS OF OPERATING SYSTEM

WHAT ARE THE COMPONENTS OF THE COMPUTER SYSTEM?

A computer system is composed of essential hardware and software components that work together
to perform computing tasks. The hardware includes the Central Processing Unit (CPU), which acts as
the brain of the computer, executing instructions and processing data. Memory components such as
RAM (Random Access Memory) and ROM (Read-Only Memory) store data temporarily and
permanently, respectively, while storage devices like Hard Drives (HDDs) and Solid-State Drives
(SSDs) provide long-term data storage.
Input devices like keyboards and mice allow users to interact with the system, while output devices
such as monitors and printers display results. The motherboard connects all these components, and
the Power Supply Unit (PSU) ensures they receive the necessary power. Additionally, the Graphics
Processing Unit (GPU) accelerates image rendering, and cooling systems manage heat generated by
these components.

On the software side, the Operating System (OS) manages hardware resources, provides a
user interface, and supports running applications. Device drivers enable communication
between the OS and hardware peripherals, while application software like word processors
and web browsers performs specific user tasks. Firmware, stored on ROM, controls
hardware functions and assists in booting the computer. Utilities are system management
tools that optimize performance and maintain the system’s health. Together, these hardware
and software components form a cohesive system that performs various computing
functions.

EXPLAIN COMPUTER SYSTEM ORGANIZATION

Computer-System Organization refers to the structure and interaction of a computer's
hardware components, which work together to perform computing tasks. It includes:

1. Central Processing Unit (CPU): The "brain" of the computer that executes
instructions from programs.
2. Memory: Stores data and instructions temporarily (RAM) or permanently (storage
devices like SSDs and HDDs).
3. Input/Output Devices (I/O): Allows interaction with the computer (e.g., keyboard,
mouse, display).
4. Bus: A communication system that transfers data between components.
5. System Software: The operating system and other software that manage hardware
and enable application execution.

This organization ensures that the hardware components function cohesively to perform
computing tasks efficiently.

WHAT IS AN OPERATING SYSTEM?


An operating system (OS) is system software that manages computer hardware and software
resources and provides common services for computer programs. It acts as an intermediary
between users and the computer hardware, enabling the execution of applications, managing
file systems, and coordinating tasks such as input/output operations, memory allocation,
and process scheduling.


LIST A FEW IMPORTANT OPERATING SYSTEMS.


• Windows: Developed by Microsoft for personal and enterprise use. Examples:
Windows 10, Windows 11.
• macOS: Developed by Apple for Macintosh computers. Examples: macOS Ventura,
macOS Monterey.
• Linux: Open-source OS used for various devices and servers. Examples: Ubuntu,
Fedora.
• Unix: Powerful, multiuser OS used in servers and workstations. Examples: AIX,
Solaris.
• Android: Open-source OS for mobile devices, developed by Google. Examples:
Android 12, Android 13.
• iOS: Developed by Apple for mobile devices, known for seamless integration with
Apple products. Examples: iOS 16, iOS 15.
• Chrome OS: Developed by Google for Chromebooks, optimized for web applications.
Examples: Chrome OS 110.
• Embedded OS: Specialized OS for embedded systems and devices. Examples: RTOS,
QNX.
• Server OS: Designed for server environments, focusing on network and resource
management. Examples: Windows Server, Ubuntu Server.

WHAT ARE THE GOALS OF AN OS? or EXPLAIN THE VARIOUS OPERATIONS OF THE
OPERATING SYSTEM


The core functions (goals) that an operating system performs to manage and control a
computer system are as follows:

1. Process Management: Handles process creation, scheduling, execution, and
termination.
2. Memory Management: Manages allocation and deallocation of memory, including
virtual memory.
3. File System Management: Organizes, stores, and retrieves data on storage devices.
4. Device Management: Controls and coordinates hardware devices through device
drivers.
5. User Interface Management: Provides interfaces for user interaction, such as
command-line or graphical user interfaces.
6. Security and Access Control: Enforces user authentication and access permissions
to protect data and resources.

These operations ensure efficient, secure, and reliable system performance.

WHAT ARE THE CATEGORIES OF OPERATING SYSTEMS?


Batch OS: Processes batches of jobs sequentially without user interaction.
Time-Sharing OS: Allows multiple users to share system resources simultaneously.
Distributed OS: Manages multiple computers as a single system.
Real-Time OS: Provides immediate processing and response for time-critical tasks.
Network OS: Manages network resources and communication between connected
computers.
Embedded OS: Designed for embedded systems within larger devices.
Mobile OS: Tailored for mobile devices like smartphones and tablets.
Multiprocessing OS: Supports multiple CPUs for simultaneous processing.
Multi-User OS: Allows multiple users to access the system concurrently.
Single-User OS: Supports one active user session at a time.
Interactive OS: Enables direct user interaction via commands or GUI.
Hybrid OS: Combines features of different OS types, like time-sharing and real-time
processing.


WHAT ARE THE LAYERS IN THE OPERATING SYSTEM?

1. Hardware Layer: This layer interacts with the system hardware and coordinates
with all the peripheral devices used, such as a printer, mouse, keyboard, scanner, etc.
2. CPU Scheduling Layer: This layer deals with scheduling the processes for the CPU.
Many scheduling queues are used to handle processes. When the processes enter the
system, they are put into the job queue.
3. Memory Management: Memory management deals with memory and moving
processes from disk to primary memory for execution and back again. This is handled
by the third layer of the operating system. All memory management is associated with
this layer.
4. Process Management Layer: This layer is responsible for managing the processes,
i.e., assigning the processor to a process and deciding how many processes will stay
in the waiting schedule.
5. I/O Buffer Layer: I/O devices are very important in computer systems. They provide
users with the means of interacting with the system. This layer handles the buffers
for the I/O devices and makes sure that they work correctly.
6. User Programs: This is the highest layer in the layered operating system. This layer
deals with the many user programs and applications that run in an operating system,
such as word processors, games, browsers, etc.

EXPLAIN THE OPERATING SYSTEM SERVICES

Operating-System Services are the functions and features provided by an operating system
to support and manage hardware and software resources. Key services include:

1. Process Management: Manages the execution of processes, including process
creation, scheduling, and termination.
Service: Ensures efficient CPU utilization and multitasking.
2. Memory Management: Handles the allocation and deallocation of memory,
including virtual memory management.
Service: Optimizes memory use and isolates processes.
3. File System Management: Manages the organization, storage, retrieval, naming, and
access of files on storage devices.
Service: Provides a structured and secure way to store and access data.


4. Device Management: Controls and coordinates hardware devices through device
drivers.
Service: Facilitates communication between hardware and software, ensuring proper
operation of peripherals.
5. Security and Access Control: Enforces user authentication and manages
permissions to protect data and system resources.
Service: Prevents unauthorized access and ensures data integrity and confidentiality.
6. User Interface: Provides a means for users to interact with the system, through
command-line interfaces (CLI) or graphical user interfaces (GUI).
Service: Enhances user experience and accessibility.
7. Networking: Manages network connections and communication between
computers.
Service: Facilitates data exchange and network resource sharing.
8. Backup and Recovery: Provides mechanisms for data backup and restoration in case
of system failure.
Service: Ensures data integrity and availability in the event of hardware or software
issues.

These services collectively enable the operating system to effectively manage system
resources, provide a user-friendly environment, and support application execution.

WHAT ARE THE DUAL MODE OPERATIONS OF AN OS?

Dual-mode Operation refers to the CPU operating in two modes: User Mode for running
applications with restricted access to system resources, and Kernel Mode for executing
system-level operations with full access to hardware. This separation enhances system
security and stability by isolating user processes from critical system functions.

WHAT IS A KERNEL?

The kernel is the core part of an operating system that manages hardware resources, handles
system calls, and provides essential services such as process and memory management, file
system operations, and device control. It operates in privileged mode to ensure secure and
efficient system functioning.


WHAT IS A BOOTSTRAP PROGRAM?

A bootstrap program, or bootloader, initializes hardware and loads the operating system into
memory when the computer starts. It sets up the system and transfers control to the OS,
enabling it to manage the computer.

EXPLAIN THE STORAGE DEVICE HIERARCHY

The storage device hierarchy represents the different levels of storage in a computer system,
ordered by speed, capacity, and cost. From fastest and smallest to slowest and largest, the
levels typically run: CPU registers, cache, main memory (RAM), solid-state and magnetic
disks, and tertiary storage such as optical discs and magnetic tape.


This hierarchy helps understand the trade-offs between speed, capacity, and cost for
different storage solutions in a computer system.

WHAT IS MULTIPROGRAMMING?

Multiprogramming is a technique in batch systems where multiple programs are loaded
into memory at the same time to maximize CPU utilization. It allows the CPU to
switch between processes, keeping it busy by overlapping I/O operations and processing
tasks. This approach increases throughput and reduces idle time.

MEMORY LAYOUT FOR MULTIPROGRAMMED SYSTEM

(Figure omitted: in the classic layout, the operating system occupies low memory and
several user jobs reside in separate partitions of the remaining memory, so the CPU
always has a runnable job available.)


WHAT IS MULTI-TASKING (TIME SHARING)?

Multitasking (Timesharing) is an operating system technique that allows multiple tasks or
processes to be executed seemingly simultaneously by rapidly switching between them. The
CPU allocates small time slices to each task, providing the illusion of concurrent execution
and enhancing system responsiveness and user interaction.

WHAT IS A MULTIPROCESSOR SYSTEM?

Multiprocessor System is a computer system that uses two or more processors (CPUs) to
perform tasks concurrently. This setup enhances performance, reliability, and processing
power by allowing multiple processors to work together on different or the same tasks.

Types:

○ Symmetric Multiprocessing (SMP): All processors have equal access to memory
and I/O resources, and they share tasks equally.
○ Asymmetric Multiprocessing (AMP): One processor (master) controls the system
and delegates tasks to other processors (slaves) with specialized roles.

HOW DOES SYMMETRIC MULTIPROCESSING (SMP) WORK? (DUAL CORE)

Symmetric Multiprocessing (SMP) works by using multiple processors that share a
common memory and I/O system. Here’s how it operates:

1. Shared Memory: All processors have equal access to a single shared memory space,
allowing them to read from and write to the same memory locations.
2. Equal Access: Each processor can access all system resources, such as memory and
I/O devices, ensuring that no processor is privileged over another.


3. Task Distribution: The operating system distributes tasks among the processors.
Processes and threads can be executed in parallel, improving performance and
efficiency.
4. Synchronization: The system manages data consistency and synchronization
between processors to ensure that they do not conflict with each other or cause data
corruption.
5. Load Balancing: The operating system dynamically balances the workload across
processors to maximize resource utilization and system performance.

HOW DO SYSTEM CALLS WORK IN THE OS?

System calls function as the interface between user applications and the operating system
kernel. Here’s a simplified overview of how system calls work in an operating system:

• Application Request: A user application makes a system call to request a service from
the operating system. This is typically done using a library function that wraps the
system call.
• System Call Invocation: The library function triggers a software interrupt or a special
CPU instruction to switch from user mode to kernel mode. This switch is necessary
because the kernel has higher privileges than user applications.
• Context Switch: The operating system performs a context switch, saving the state of
the user application and loading the kernel's state. This involves saving CPU registers
and memory information related to the user process.


• System Call Handler: The kernel identifies the system call request through a system
call number and invokes the appropriate system call handler or function within the
kernel. This handler executes the requested operation.
• Execution: The kernel performs the requested operation, such as file manipulation,
memory allocation, or process control. It accesses hardware resources or system data
as needed.
• Return to User Mode: After completing the system call, the kernel restores the state
of the user application and performs a context switch back to user mode. The results
of the system call are returned to the application.
• Application Continuation: The user application receives the results of the system call
and continues execution based on the information or changes made by the OS.

This process ensures that user applications can perform privileged operations securely and
efficiently while maintaining system stability and protection.

IMPORTANT TYPES OF SYSTEM CALLS

System calls can be categorized based on the type of service they provide to user
applications. Here are the main types:

1. Process Control: Manage processes, including their creation, execution, and termination.

2. File Management: Handle file operations such as creating, deleting, and accessing files.

3. Device Management: Manage and interact with hardware devices through input and
output operations.

4. Memory Management: Manage memory allocation and deallocation.

5. Communication: Facilitate inter-process communication and synchronization.
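To make these categories concrete, here is a minimal POSIX sketch (not from the original
text; the file name "notes.txt" is hypothetical and error handling is omitted) that touches
process control, file management, and device I/O system calls:

/* Illustrative only: one system call from several of the categories above. */
#include <fcntl.h>      /* open */
#include <stdio.h>
#include <sys/wait.h>   /* wait */
#include <unistd.h>     /* fork, write, close, _exit */

int main(void) {
    pid_t pid = fork();                 /* process control: create a child */
    if (pid == 0) {
        int fd = open("notes.txt", O_CREAT | O_WRONLY, 0644); /* file management */
        write(fd, "hello\n", 6);        /* device/file I/O */
        close(fd);                      /* release the descriptor */
        _exit(0);                       /* process control: terminate */
    }
    wait(NULL);                         /* synchronize with the child's exit */
    printf("child finished\n");
    return 0;
}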

WHAT ARE SYSTEM PROGRAMS? WHAT IS THEIR ROLE IN THE OPERATING SYSTEM?


System programs are utilities that assist in the operation, management, and maintenance of
a computer system. They include:

1. Command-Line Utilities: Tools for executing system tasks via command-line
interfaces (e.g., ls, cp, rm).
2. System Libraries: Precompiled routines used by applications for common functions
(e.g., libc, Windows API).
3. Shells: Command interpreters for user interaction (e.g., Bash, PowerShell).
4. File Management Utilities: Programs for handling files and directories (e.g., file, tar,
gzip).


5. System Monitoring Tools: Utilities for observing system performance and resource
usage (e.g., top, vmstat).
6. Backup and Recovery Programs: Tools for data backup and restoration (e.g., rsync,
dump).
7. Security Tools: Programs for protecting the system and ensuring data security (e.g.,
firewalld, chkrootkit).
8. System Configuration Tools: Utilities for configuring system settings (e.g., systemd,
ifconfig).

These programs facilitate system management, enhance functionality, and provide user
interaction with the operating system.

WHAT IS AN INTERRUPT? HOW DOES IT WORK?

An interrupt is a signal sent to the CPU indicating that an event needs immediate attention.
It temporarily halts the current process, allowing the operating system to address the event
before resuming the previous task.

How does it work?

1. Interrupt Request: An interrupt signal is generated by hardware or software (e.g., an
I/O device, timer, or software request).
2. Interrupt Handling: The CPU pauses its current execution and saves its state
(context), including program counter and registers.
3. Interrupt Service Routine (ISR): The operating system’s interrupt handler (ISR) is
invoked to address the interrupt. This routine performs necessary tasks, such as
handling device input or responding to a timer event.
4. Context Restoration: After the ISR completes, the CPU restores the previous state and
resumes the interrupted process.

Interrupts allow the operating system to efficiently manage and respond to various
events and ensure timely processing of important tasks.


PROCESS MANAGEMENT

WHAT IS A PROCESS?
A process in an operating system is a program in execution. It includes the program code, its
current activity, and associated resources like memory, CPU registers, and I/O operations. A
process is the fundamental unit of work in an OS, and the OS manages multiple processes by
allocating resources, scheduling execution, and handling communication between them.

DIFFERENTIATE PROCESS VS THREAD


A PROCESS, as defined above, is a program in execution: the program code, its current
activity, and associated resources such as memory, CPU registers, and I/O operations. It is
the fundamental unit of work in an OS.

A THREAD is the smallest unit of execution within a process. It operates independently and
shares resources with other threads in the same process, allowing for concurrent
execution.

WHAT ARE THE OPERATIONS ON PROCESSES?

1. Creation: Starting a new process using system calls like fork().
2. Execution: Running the process on the CPU.
3. Suspension: Pausing a process, either by moving it to the wait queue or swapping it
out of memory.
4. Resumption: Restarting a paused or suspended process from where it left off.
5. Termination: Ending a process, either normally or due to an error, and releasing its
resources.
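A minimal sketch of creation, execution, suspension, and termination using POSIX
fork()/wait() (assumes a POSIX system):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();               /* creation: clone the calling process */
    if (pid == 0) {                   /* execution: the child runs this branch */
        printf("child %d running\n", getpid());
        exit(0);                      /* termination: child releases its resources */
    }
    wait(NULL);                       /* suspension: parent waits until child exits */
    printf("parent %d resumed\n", getpid());  /* resumption */
    return 0;
}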

WHAT ARE THE STATES OF A PROCESS? (QUEUING DIAGRAM FOR PROCESS SCHEDULING)

As a process executes, it changes state:

New: The process is being created
Running: Instructions are being executed
Waiting: The process is waiting for some event to occur
Ready: The process is waiting to be assigned to a processor
Terminated: The process has finished execution


WHAT IS A PROCESS CONTROL BLOCK?

A Process Control Block (PCB) is a data structure in an operating system that contains
important information about a specific process. It includes:

1. Process ID (PID): Unique identifier for the process.
2. Process State: Current status (e.g., ready, running).
3. Program Counter: Address of the next instruction.
4. CPU Registers: Saved CPU state for the process.
5. Memory Management Info: Details on memory allocation.
6. I/O Status Info: Information on I/O devices and files.
7. Process Priority: Priority level for scheduling.

The PCB is essential for process management and context switching.
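The seven fields above map naturally onto a struct. The sketch below is illustrative only:
field names and sizes are hypothetical, and real kernels (e.g., Linux's task_struct) hold far
more information.

/* A simplified PCB as a C struct (illustrative, not a real kernel structure). */
struct pcb {
    int pid;                              /* 1. process ID */
    enum { NEW, READY, RUNNING,
           WAITING, TERMINATED } state;   /* 2. process state */
    unsigned long program_counter;        /* 3. address of the next instruction */
    unsigned long registers[16];          /* 4. saved CPU registers */
    void *page_table;                     /* 5. memory-management info */
    int open_files[16];                   /* 6. I/O status info (open descriptors) */
    int priority;                         /* 7. scheduling priority */
};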


WHAT IS A ZOMBIE PROCESS? A terminated process still in the process table because its
exit status has not been read by the parent process.

WHAT IS AN ORPHAN PROCESS? A process whose parent has terminated, making it
adopted by the init process or another designated process.

WHAT IS A DAEMON PROCESS? A daemon process is a background process that runs
continuously, providing system or application services without user interaction.
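A small experiment (assumes a POSIX system) that briefly creates a zombie: the child exits
immediately, but it remains in the process table until the parent reads its exit status with
waitpid(). During the sleep() window you can observe the zombie with ps.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0)
        exit(42);                 /* child terminates at once */
    sleep(5);                     /* child is a zombie during this window */
    int status;
    waitpid(pid, &status, 0);     /* reaping: exit status read, zombie removed */
    printf("child exited with %d\n", WEXITSTATUS(status));
    return 0;
}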

WHAT IS THE READY QUEUE AND WAIT QUEUE?

Ready Queue: Contains processes waiting for CPU time, ready to execute.
Wait Queue: Contains processes waiting for an event or resource to become available.
Context switches allow multiple processes to share a single CPU efficiently, enabling
multitasking.

WHAT ARE INDEPENDENT PROCESSES?


Two processes are said to be independent if the execution of one process does not affect the
execution of another process.

WHAT ARE COOPERATIVE PROCESSES?


Two processes are said to be cooperative if the execution of one process affects the execution
of another process. These processes need to be synchronized so that the order of execution
can be guaranteed.

WHAT IS INTER-PROCESS COMMUNICATION? HOW DOES IT WORK?

Inter-process communication (IPC) refers to the mechanisms that allow processes to
exchange data and coordinate their actions while running concurrently. IPC is crucial for
processes that need to share information or work together, particularly in multitasking and
distributed systems.

Key IPC Mechanisms:


1. Message Passing: Processes send and receive messages.
   ○ Working:
      ▪ Send: One process sends a message to another.
      ▪ Receive: The receiving process retrieves the message.
   ○ Example: Client-server communication.
2. Shared Memory: Processes share a common memory space.
   ○ Working:
      ▪ Attach: Processes map shared memory into their address space.
      ▪ Access: They read and write directly to the shared memory.
   ○ Example: Data exchange through a shared buffer.
3. Pipes: Unidirectional communication channel.
   ○ Working:
      ▪ Create: A pipe is created.
      ▪ Write: Data is sent to the pipe.
      ▪ Read: Another process retrieves data from the pipe.
   ○ Example: Chaining commands in a shell.
4. Sockets: Endpoints for network communication.
   ○ Working:
      ▪ Create: Sockets are created on each process.
      ▪ Connect: Processes connect using sockets.
      ▪ Send/Receive: Data is exchanged over the connection.
   ○ Example: Web browser communicating with a server.
5. Semaphores: Synchronization primitives for resource management.
   ○ Working:
      ▪ Initialization: Semaphores are initialized.
      ▪ Wait/Signal: Processes wait or signal the semaphore to control access.
   ○ Example: Managing access to a shared resource like a printer.

IPC mechanisms facilitate communication and synchronization among processes, ensuring
efficient resource sharing and coordination in concurrent execution. Each mechanism has its
use cases, advantages, and trade-offs.
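As one concrete instance of the mechanisms above, here is a minimal sketch of IPC through
an anonymous pipe (assumes a POSIX system; the message text is made up): the parent
writes and the child reads.

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    pipe(fd);                         /* fd[0]: read end, fd[1]: write end */
    if (fork() == 0) {                /* child acts as the receiver */
        char buf[64];
        close(fd[1]);                 /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        buf[n] = '\0';
        printf("child received: %s\n", buf);
        return 0;
    }
    close(fd[0]);                     /* parent acts as the sender */
    const char *msg = "hello via pipe";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    return 0;
}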


CHAPTER 2: PROCESS MANAGEMENT

WHAT IS A SCHEDULER?

A scheduler in an operating system is a component responsible for managing the execution
of processes. It determines which process in the ready queue gets CPU time and when,
ensuring efficient utilization of system resources and maintaining system responsiveness.

Types of schedulers:

1. Long-Term Scheduler (Job scheduler): Manages process admission into the system,
balancing the mix of processes.
2. Mid-Term Scheduler: Handles swapping processes in and out of memory.
3. Short-Term Scheduler (CPU scheduler): Selects which process in the ready queue
will be executed next by the CPU.

WHAT IS CPU BURST AND I/O BURST?

CPU Burst: The period during which a process uses the CPU for computations or tasks. It
involves continuous execution by the CPU without interruption.
I/O Burst: The period during which a process performs input/output operations, such as
reading from or writing to a disk or network, and is waiting for I/O operations to complete.

LIST THE IMPORTANT CPU SCHEDULING ALGORITHMS


1. First-Come, First-Served (FCFS)
2. Shortest Job First (SJF)
3. Shortest Remaining Time First (SRTF)
4. Round Robin (RR)
5. Priority Scheduling
6. Multilevel Queue Scheduling

WHAT ARE PREEMPTIVE AND NON-PREEMPTIVE SCHEDULING?

Non-Preemptive Scheduling:

Once a process starts executing, it runs to completion or until it voluntarily relinquishes the
CPU (e.g., by waiting for I/O). The CPU is not taken away forcibly.

Examples: First-Come, First-Served (FCFS), Shortest Job First (SJF).

Advantage: Simpler and with less context switching overhead.

Disadvantage: Can lead to longer wait times for high-priority processes if lower-priority
ones are running.


Preemptive scheduling:

The CPU can be taken away from a running process and allocated to another process. This
allows higher-priority processes to interrupt and replace lower-priority ones.

Examples: Round Robin, Shortest Remaining Time First (SRTF).

Advantage: Ensures that high-priority tasks receive timely CPU attention.

Disadvantage: Can lead to more frequent context switching, which may increase overhead.

WHAT IS CONTEXT SWITCHING?

A context switch is the process of saving the state of a currently running process and loading
the state of the next process to be executed by the CPU. It involves:

1. Saving: Storing the current process's state, including its CPU registers, program
counter, and memory management information, into its Process Control Block (PCB).
2. Loading: Retrieving the state of the next process from its PCB and restoring it to the
CPU.

WHAT IS A DISPATCHER? WHAT IS DISPATCH LATENCY?


A dispatcher handles context switching by saving the state of the currently running process
and loading the state of the next process, then allocating CPU time to the selected process.

Dispatch latency is the time it takes for the dispatcher to stop one process and start another
running.

WHAT ARE THE IMPORTANT SCHEDULING CRITERIA?


1. CPU utilization – keep the CPU as busy as possible
2. Throughput – Number of processes that complete their execution per time unit
3. Turnaround time – amount of time to execute a particular process
4. Waiting time – amount of time a process has been waiting in the ready queue
5. Response time – amount of time it takes from when a request was submitted until
the first response is produced, not output (for time-sharing environment)

Turnaround Time (TAT) is the total time from when a process arrives to when it completes.

Formula: Turnaround Time = Completion Time − Arrival Time

Example: If a process arrives at time 2 and completes at time 10, TAT = 10 − 2 = 8.


HOW DO THE FOLLOWING PAIRS OF SCHEDULING CRITERIA CONFLICT IN CERTAIN SETTINGS?
A. CPU UTILIZATION AND RESPONSE TIME.
B. AVERAGE TURNAROUND TIME AND MAXIMUM WAITING TIME.
C. I/O DEVICE UTILIZATION AND CPU UTILIZATION.

In operating systems, scheduling criteria often conflict with one another, leading to
trade-offs. Here’s a discussion of how specific pairs of scheduling criteria can conflict:

1. CPU Utilization and Response Time

• CPU Utilization: Refers to the percentage of time the CPU is actively processing tasks.
High CPU utilization is generally desirable to maximize resource usage.
• Response Time: The time taken from when a request is made until the first response
is received. Lower response times are preferred for interactive processes.

Conflict:

• In systems optimized for high CPU utilization, longer jobs may be prioritized,
resulting in increased queuing times for short, interactive tasks. This can lead to
higher response times for users who expect quick feedback.
• Conversely, if the system focuses on minimizing response time (e.g., by prioritizing
short tasks), CPU utilization may decrease as longer tasks are starved of CPU time,
leading to inefficiencies and potentially underutilized resources.

2. Average Turnaround Time and Maximum Waiting Time

• Average Turnaround Time: The average time taken to execute a process from
submission to completion. It includes waiting time, execution time, and any other
delays.
• Maximum Waiting Time: The longest time that any process has to wait in the queue
before it starts execution.

Conflict:

• To minimize average turnaround time, scheduling algorithms (like Shortest Job
First) may prioritize shorter tasks, allowing them to complete quickly and thus
improving the average. However, this can lead to longer waiting times for larger jobs,
resulting in increased maximum waiting time.
• Conversely, if the goal is to minimize the maximum waiting time by ensuring that
all jobs get a chance to execute in a timely manner (e.g., through round-robin
scheduling), it may lead to increased average turnaround time as longer processes
may not get sufficient CPU time.


3. I/O Device Utilization and CPU Utilization

• I/O Device Utilization: Refers to the efficient use of I/O devices, ensuring they are
actively processing requests rather than idling.
• CPU Utilization: As previously mentioned, this is the degree to which the CPU is
actively working on tasks.

Conflict:

• When optimizing for I/O device utilization, processes that require I/O operations
may be favored, leading to periods where the CPU is idle while waiting for I/O
operations to complete. This results in lower overall CPU utilization.
• On the other hand, focusing on CPU utilization may involve keeping the CPU busy
with CPU-bound tasks, which can lead to increased idle time for I/O devices. If I/O-
bound tasks are delayed, I/O devices may not be used efficiently, resulting in wasted
potential.

These conflicts highlight the trade-offs inherent in process scheduling. Achieving an optimal
balance often requires careful consideration of the specific workload and system goals, and
different scheduling algorithms may be employed based on the desired outcomes in a given
environment.

WHAT ARE THE SCHEDULING ALGORITHM OPTIMIZATION CRITERIA?


● Max CPU utilization
● Max throughput
● Min turnaround time
● Min waiting time
● Min response time

FIRST-COME, FIRST-SERVED (FCFS) SCHEDULING PRINCIPLE

FCFS is a simple CPU scheduling algorithm where the process that arrives first in the ready
queue is executed first. It operates on a non-preemptive basis, meaning once a process starts
execution, it runs to completion before the next process is scheduled.

Characteristics:

● Order of Execution: Processes are executed in the order they arrive.
● Non-Preemptive: The CPU is not taken away from a process once it starts running.
● Performance: It can lead to the "convoy effect," where short processes are delayed
by longer processes, causing longer average wait times.


Advantages:

● Simple and easy to implement.
● Predictable, as processes are executed in the order they arrive.

Disadvantages:

● Can lead to poor utilization of CPU if a long process arrives first.
● High average waiting time, especially if there is a mix of short and long processes.

Illustration:
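The original Gantt-chart figure is not reproduced here. As a stand-in, the sketch below
(burst times are illustrative, and all processes are assumed to arrive at time 0) computes
FCFS waiting and turnaround times; note how the long first job delays the short ones,
which is exactly the convoy effect discussed next.

#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};              /* illustrative burst times */
    int n = 3, clock = 0;
    double total_wait = 0;
    for (int i = 0; i < n; i++) {          /* processes run in arrival order */
        int waiting = clock;               /* time spent in the ready queue */
        clock += burst[i];                 /* completion time of process i */
        printf("P%d: waiting=%d turnaround=%d\n", i + 1, waiting, clock);
        total_wait += waiting;
    }
    printf("average waiting time = %.2f\n", total_wait / n);   /* 17.00 */
    return 0;
}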

WHAT IS THE CONVOY EFFECT?


The convoy effect occurs in CPU scheduling when a set of processes get delayed because a
longer process is holding the CPU, causing shorter processes to wait. This typically happens
in non-preemptive scheduling algorithms like First-Come, First-Served (FCFS).


SHORTEST JOB FIRST (SJF) SCHEDULING PRINCIPLE

SJF Scheduling is a CPU scheduling algorithm where the process with the shortest burst
time (the time required to complete the process) is executed first.

Characteristics:

● Order of Execution: Processes are selected based on the shortest CPU burst time.
● Preemptive or Non-Preemptive: SJF can be implemented in both ways:
○ Non-Preemptive SJF: Once a process starts, it runs to completion.
○ Preemptive SJF (also known as Shortest Remaining Time First, SRTF): If a
new process arrives with a shorter burst time than the remaining time of the
current process, the CPU is preempted to execute the new process.

Advantages:

● Minimizes the average waiting time.
● Efficient for batch processing with known burst times.

Disadvantages:

● Difficult to implement in real-time systems, as it requires precise knowledge of
future burst times.
● Can lead to the "starvation" of longer processes if short processes keep arriving.

Illustration:
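Again the original chart is not reproduced. A small sketch of non-preemptive SJF with
illustrative burst times (all processes assumed to arrive at time 0): sort by burst time, then
accumulate as in FCFS.

#include <stdio.h>
#include <stdlib.h>

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;   /* ascending burst time */
}

int main(void) {
    int burst[] = {6, 8, 7, 3};                 /* illustrative burst times */
    int n = 4, clock = 0;
    double total_wait = 0;
    qsort(burst, n, sizeof burst[0], cmp);      /* shortest job runs first */
    for (int i = 0; i < n; i++) {
        total_wait += clock;                    /* waiting time = start time here */
        clock += burst[i];
    }
    printf("average waiting time = %.2f\n", total_wait / n);   /* 7.00 */
    return 0;
}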


PRIORITY SCHEDULING PRINCIPLE

Priority Scheduling is a CPU scheduling algorithm where each process is assigned a
priority, and the CPU is allocated to the process with the highest priority.

Characteristics:

● Priority Assignment: Each process has a priority level, which can be determined
based on various factors like the importance of the task, resource requirements, or
user input.
● Order of Execution: Processes with higher priority are executed before those with
lower priority.
● Preemptive or Non-Preemptive:
○ Preemptive Priority Scheduling: If a new process arrives with a higher
priority than the currently running process, the CPU is preempted and
assigned to the new process.
○ Non-Preemptive Priority Scheduling: The CPU is allocated to a process and
runs to completion, even if a higher-priority process arrives.

Advantages:

● Ensures that important tasks receive CPU time quickly.
● Can be tailored to meet specific system requirements by adjusting priorities.

Disadvantages:

● Starvation or Indefinite Blocking: Lower-priority processes might never get
executed if higher-priority processes keep arriving.
● Solution to Starvation: Aging, a technique where the priority of a process is
gradually increased the longer it waits in the queue.

Priority Scheduling is widely used in systems where certain tasks are more critical than
others, ensuring that crucial processes are completed promptly.

Illustration:
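The original figure is not reproduced here. A sketch of non-preemptive priority scheduling
(values are illustrative; by the common convention used here, a smaller number means
higher priority): pick the highest-priority unfinished process each time the CPU frees up.

#include <stdbool.h>
#include <stdio.h>

int main(void) {
    int burst[]    = {10, 1, 2, 1, 5};
    int priority[] = {3, 1, 4, 5, 2};    /* smaller value = higher priority */
    bool finished[5] = {false};
    int n = 5, clock = 0;
    for (int count = 0; count < n; count++) {
        int pick = -1;
        for (int i = 0; i < n; i++)      /* choose the highest-priority process */
            if (!finished[i] && (pick < 0 || priority[i] < priority[pick]))
                pick = i;
        printf("P%d: waiting=%d\n", pick + 1, clock);
        clock += burst[pick];
        finished[pick] = true;
    }
    return 0;
}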


ROUND ROBIN PRINCIPLE

Round Robin (RR) is a preemptive CPU scheduling algorithm designed to allocate CPU time
to each process in the ready queue in a cyclic order, ensuring that all processes get an equal
share of the CPU.

Characteristics:

● Time Quantum: A fixed time slice or time quantum is assigned to each process. The
CPU is allocated to each process for a time equal to this quantum.
● Cyclic Order: Processes are placed in a queue, and the CPU cycles through them,
allocating the time quantum to each process in turn.
● Preemptive: If a process doesn’t finish within its time quantum, it is preempted,
moved to the back of the queue, and the next process is given the CPU.

Advantages:

● Fairness: All processes are treated equally, with no priority given to any specific
process.
● Responsiveness: Suitable for time-sharing systems, providing a reasonable
response time for interactive users.


Disadvantages:

● Time Quantum Size: The performance depends heavily on the size of the time
quantum. If too small, it leads to excessive context switching; if too large, it behaves
like First-Come, First-Served (FCFS) scheduling.
● Overhead: Frequent context switching can cause overhead, affecting system
performance.

Round Robin is commonly used in multitasking and time-sharing systems to ensure that all
processes get a fair share of CPU time.

Illustration:
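In place of the original figure, a compact Round Robin sketch (the time quantum and burst
times are illustrative; all processes are assumed to arrive at time 0):

#include <stdio.h>

int main(void) {
    int remaining[] = {24, 3, 3};         /* remaining burst time per process */
    int n = 3, quantum = 4, clock = 0, done = 0;
    while (done < n) {
        for (int i = 0; i < n; i++) {     /* cycle through the ready queue */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            clock += slice;               /* run for at most one time quantum */
            remaining[i] -= slice;
            if (remaining[i] == 0) {      /* finished: leave the queue */
                printf("P%d completes at time %d\n", i + 1, clock);
                done++;
            }                             /* otherwise it rejoins the back of
                                             the queue on the next pass */
        }
    }
    return 0;
}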


SHORTEST REMAINING TIME FIRST PRINCIPLE

SRTF is a preemptive version of the Shortest Job First (SJF) scheduling algorithm. In SRTF,
the process with the shortest remaining CPU burst time is selected for execution. If a new
process arrives with a shorter remaining time than the current process, the CPU is
preempted and allocated to the new process.

Characteristics:

● Preemptive: The currently running process can be interrupted if a new process
with a shorter remaining burst time arrives.
● Dynamic Selection: Continuously compares the remaining burst times of the
current process and incoming processes.
● Optimal for Waiting Time: Minimizes average waiting time compared to other
scheduling algorithms.

Advantages:

● Efficiency: Typically results in lower average turnaround and waiting times
compared to non-preemptive SJF.
● Responsive to Short Jobs: Short processes get quicker access to the CPU.

Disadvantages:

● Starvation or Indefinite Blocking: Longer processes may be indefinitely delayed if
short processes keep arriving.
● Complexity: More complex to implement, as it requires continuous tracking of the
remaining burst times.

SRTF is particularly effective in environments where process execution times vary
significantly, as it optimizes CPU usage by prioritizing shorter tasks.

Illustration:
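In place of the original figure, a unit-time SRTF simulation sketch (arrival and burst values
are illustrative): at every tick, run the arrived process with the least remaining time,
preempting whenever a shorter job shows up.

#include <stdio.h>

int main(void) {
    int arrival[]   = {0, 1, 2, 3};
    int remaining[] = {8, 4, 9, 5};
    int n = 4, done = 0, clock = 0;
    while (done < n) {
        int pick = -1;
        for (int i = 0; i < n; i++)   /* shortest remaining time among arrived */
            if (arrival[i] <= clock && remaining[i] > 0 &&
                (pick < 0 || remaining[i] < remaining[pick]))
                pick = i;
        if (pick < 0) { clock++; continue; }   /* CPU idle: nothing has arrived */
        remaining[pick]--;                     /* run the chosen process 1 tick */
        clock++;
        if (remaining[pick] == 0) {
            printf("P%d completes at %d\n", pick + 1, clock);
            done++;
        }
    }
    return 0;
}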


PROCESS SYNCHRONIZATION

DIFFERENTIATE INDEPENDENT PROCESSES vs COOPERATIVE PROCESSES


Two processes are said to be independent if the execution of one process does not affect the
execution of another process.

Two processes are said to be cooperative if the execution of one process affects the execution of
another process. These processes need to be synchronized so that the order of execution can be
guaranteed.

WHAT IS A RACE CONDITION? ILLUSTRATE WITH AN EXAMPLE.

A race condition arises when more than one process executes the same code or accesses the
same memory or shared variable at the same time. In that situation the final value of the
shared variable may be wrong, because the outcome depends on the particular order in
which the accesses happen; the processes effectively "race" to have their result recorded.
Consider the following scenario: a shared variable 'shared' holds the value 10, Process 1
increments it, and Process 2 decrements it.

When Process 1 and Process 2 execute in parallel without synchronization, both may read
the old value before either writes back, so both processes can end up with an inconsistent
value such as 11 or 9 for 'shared'.

A race condition occurs when the outcome of a process depends on the timing or sequence of
uncontrollable events, typically when multiple processes or threads access shared resources
concurrently without proper synchronization.
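Reconstructing that scenario as runnable code (a sketch using POSIX threads; a single
increment/decrement pair races only occasionally, but the unsynchronized read-modify-write
is exactly the hazard described above):

#include <pthread.h>
#include <stdio.h>

int shared = 10;                       /* the shared variable from the example */

void *increment(void *arg) { shared++; return NULL; }  /* read, add 1, write */
void *decrement(void *arg) { shared--; return NULL; }  /* read, subtract 1, write */

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, decrement, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* usually 10, but 9 or 11 are possible */
    return 0;
}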


WHAT IS THE NEED FOR PROCESS SYNCHRONIZATION?

Process synchronization is the task of coordinating the execution of processes so that no
two processes access the same shared data and resources at the same time.
● It is a procedure used to preserve the appropriate order of execution of
cooperative processes.
● Various synchronization mechanisms exist to synchronize processes.
● Process synchronization is mainly needed in a multi-process system when multiple
processes run together and more than one process tries to gain access to the
same shared resource or data at the same time.

LIST THE CLASSICAL PROCESS SYNCHRONIZATION PROBLEMS


Classic process synchronization problems in operating systems are fundamental challenges
that deal with coordinating multiple processes or threads to ensure correct execution and
resource sharing. Key problems include:

1. Critical Section Problem
2. Producer-Consumer Problem
3. Dining Philosophers Problem
4. Readers-Writers Problem
5. Sleeping Barber Problem

WHAT IS A CRITICAL SECTION?


A critical section is a part of a program where a process or thread accesses shared resources,
such as variables, data structures, or hardware devices. Since multiple processes might try to
enter their critical sections simultaneously, proper synchronization is necessary to ensure that
only one process accesses the shared resource at a time, preventing data corruption and ensuring
consistency.

CLASSICAL PROBLEM #1: WHAT IS THE CRITICAL SECTION PROBLEM? ILLUSTRATE WITH
AN EXAMPLE.
The critical section problem is the challenge of ensuring that when multiple processes or threads
access shared resources, only one can be in its critical section at any given time. This prevents
race conditions and ensures data consistency.

Illustration:

Imagine two processes, 1 and 2, that both want to update a shared bank account balance:

1. Initial State: The account balance is Rs.5000.
2. Process 1 starts to withdraw Rs.4000:
   ○ Reads the balance (Rs.5000).
   ○ Calculates the new balance (Rs.5000 - Rs.4000 = Rs.1000).
   ○ Before it can write the new balance back, it gets interrupted.
3. Process 2 starts to withdraw Rs.1000:
   ○ Reads the balance (Rs.5000).
   ○ Calculates the new balance (Rs.5000 - Rs.1000 = Rs.4000).
   ○ Writes the new balance (Rs.4000) back to the account.
4. Process 1 resumes and writes its calculated balance (Rs.1000) back to the account.

Problem:

After both operations, the balance should be Rs.0 (Rs.5000 - Rs.4000 - Rs.1000). However, due
to the lack of synchronization, the final balance is Rs.1000, as Process 1 overwrote the update
made by Process 2. This incorrect result arises because both threads accessed the critical section
(updating the balance) without proper coordination.

Solution design:

Each process must ask permission to enter the critical section in the Entry section, follows
the critical section with an Exit section, and then executes the Remainder section.

Entry Section: The part of the code where a process or thread attempts to enter its critical
section. This section includes the mechanism to request access to the critical section, such as
acquiring a lock or setting flags to indicate intent to enter.

Purpose: Ensure that only one process or thread can enter the critical section at a time by
checking and setting necessary synchronization variables.

Critical Section: The section of code where the process or thread accesses and modifies
shared resources. Only one process or thread should be allowed to execute this section at
any time.

Purpose: Perform operations that require exclusive access to shared resources to prevent
conflicts and ensure data consistency.


Exit Section: The part of the code where the process or thread exits the critical section and
releases any locks or synchronization mechanisms that were used to gain access.

Purpose: Allow other processes or threads to enter their critical sections by signaling that
the critical section is now free.

Remainder Section: The part of the code where the process or thread performs tasks that
do not involve shared resources. This section runs after exiting the critical section and
before the process or thread may attempt to enter the critical section again.

Purpose: Perform any non-critical operations or computations that do not require
exclusive access to shared resources.

WHAT ARE THE REQUIREMENTS OF THE SOLUTIONS TO THE CRITICAL SECTION PROBLEM?
A solution to the critical section problem must satisfy the following three conditions:

1. Mutual Exclusion: Out of a group of cooperating processes, only one process can be in its
critical section at a given point of time.

2. Progress: If no process is in its critical section, and if one or more threads want to execute
their critical section then any one of these threads must be allowed to get into its critical
section.

3. Bounded Waiting: There must be a limit on the number of times a process or thread can
be bypassed by other processes or threads before it gains access to the critical section.

CLASSICAL PROBLEM #2: PRODUCER-CONSUMER PROBLEM (BOUNDED-BUFFER PROBLEM)

The Producer-Consumer Problem is a classic synchronization issue that involves
coordinating two types of processes: producers and consumers, which interact with a shared
buffer or queue. The main challenge is to synchronize the operations of these processes to
ensure correct and efficient use of the buffer.


Description of the problem

● Producer: Generates data or items and adds them to the shared buffer.
● Consumer: Retrieves and processes data or items from the buffer.
● Buffer: A finite-size shared data structure that holds items produced by the producer
until they are consumed.

Representation of Producer:

while (true) {
    /* produce an item in next_produced */
    while (counter == BUFFER_SIZE)
        ; /* do nothing: buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}

Representation of Consumer:

while (true) {
    while (counter == 0)
        ; /* do nothing: buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;
    /* consume the item in next_consumed */
}

Challenges

1. Buffer Overflow: Prevent the producer from adding items when the buffer is full.
2. Buffer Underflow: Prevent the consumer from removing items when the buffer is
empty.
3. Synchronization: Ensure that access to the buffer is managed so that producers and
consumers do not interfere with each other.

LIST A FEW IMPORTANT SOLUTIONS ADAPTED FOR PROCESS SYNCHRONIZATION.

1. Peterson's solution
2. Synchronization hardware
3. Mutex locks
4. Semaphores

SOLUTION #1: PETERSON'S SOLUTION (SOFTWARE-BASED SOLUTION)

This is a widely used, software-based solution to the critical section problem. With this
solution, whenever one process is executing in its critical section, the other process executes
only the rest of its code, and vice versa. This method ensures that only a single process runs
in the critical section at any given time.

Assume there are two processes P1 and P2. The solution is given as follows:

P1:
interest[P1] = True;
turn = 2;
while (interest[P2] == True && turn == 2)
    ; // P1 waits
/* CRITICAL SECTION: P1 enters when either condition becomes false */
interest[P1] = False;

P2:
interest[P2] = True;
turn = 1;
while (interest[P1] == True && turn == 1)
    ; // P2 waits
/* CRITICAL SECTION: P2 enters when either condition becomes false */
interest[P2] = False;

This solution preserves all three conditions:

● Mutual Exclusion is preserved, as at any time only one process can access the critical
section.
● Progress is preserved, as a process that is outside the critical section cannot block
other processes from entering the critical section.
● Bounded Waiting is assured, as every process gets a fair chance to enter the critical
section.
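For readers who want to run the algorithm, here is a self-contained sketch using two POSIX
threads and C11 atomics (an assumption of this text, not code from the original; sequentially
consistent atomics are needed because modern hardware may reorder plain loads and stores).
Variable names follow the interest/turn convention above.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

atomic_bool interest[2];
atomic_int turn;
int shared = 0;                         /* protected by Peterson's algorithm */

void lock(int self) {
    int other = 1 - self;
    atomic_store(&interest[self], true);    /* declare intent to enter */
    atomic_store(&turn, other);             /* politely yield the turn */
    while (atomic_load(&interest[other]) && atomic_load(&turn) == other)
        ; /* busy wait */
}

void unlock(int self) { atomic_store(&interest[self], false); }

void *worker(void *arg) {
    int self = *(int *)arg;
    for (int i = 0; i < 100000; i++) {
        lock(self);
        shared++;                           /* critical section */
        unlock(self);
    }
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int id[2] = {0, 1};
    pthread_create(&t[0], NULL, worker, &id[0]);
    pthread_create(&t[1], NULL, worker, &id[1]);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    printf("shared = %d (expected 200000)\n", shared);
    return 0;
}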

SOLUTION #2: SYNCHRONIZATION HARDWARE (HARDWARE-BASED SOLUTION)

do {
acquire lock; //Lock the address space access
critical section //Enter into critical section
release lock; //Release the address space
remainder section;
} while (TRUE);
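The "acquire lock" step is typically built on an atomic hardware instruction such as
test-and-set. A minimal sketch using C11's atomic_flag (an illustrative assumption, not
code from the original text):

#include <stdatomic.h>

atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))  /* atomically sets the flag; the old
                                                value tells us if it was held */
        ; /* busy wait (spinlock) */
}

void release(void) {
    atomic_flag_clear(&lock);                /* mark the lock free again */
}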

CHRIST (Deemed to be University) 39


Operating System

SOLUTION #3: MUTEX (SOFTWARE-BASED SOLUTION)

A mutex (short for "mutual exclusion") is a synchronization primitive used to manage access
to shared resources in a concurrent system. It ensures that only one thread or process can
access a critical section of code or a shared resource at any given time, thereby preventing
race conditions and ensuring data consistency.

Characteristics:

1. Mutual Exclusion: A mutex allows only one thread to enter the critical section at a
time. If one thread holds the mutex, other threads attempting to acquire the mutex
must wait until it is released.
2. Lock and Unlock Operations:
○ Lock (Acquire): When a thread acquires a mutex, it gains exclusive access to
the shared resource or critical section. If the mutex is already locked by
another thread, the requesting thread will block until the mutex becomes
available.
○ Unlock (Release): When a thread is done with the critical section, it releases
the mutex, allowing other waiting threads to acquire it and proceed.

Solution design:

do {
    acquire();        /* entry section: obtain the lock */
    /* critical section */
    release();        /* exit section: give up the lock */
    /* remainder section */
} while (true);

// Acquire function: spin until the lock is free.
// Note: the test of 'available' and setting it to false must execute
// atomically (e.g., via a hardware test-and-set instruction).
acquire() {
    while (!available)
        ; /* busy wait */
    available = false;
}

// Release function: mark the lock as free again.
release() {
    available = true;
}

SOLUTION #4: SEMAPHORE (SOFTWARE-BASED SOLUTION)

A semaphore is a synchronization primitive used in concurrent programming to manage
access to shared resources and control process or thread execution. Semaphores help in
coordinating activities between processes or threads, ensuring that only a specified number
of processes can access a resource simultaneously.

Key Characteristics:

1. Counting Mechanism: Semaphores use a counter to keep track of the number of
resources available or to limit the number of processes that can access a critical
section.
2. Two Main Types:
○ Binary Semaphore: Also known as a mutex, it has only two states: 0 and 1. It
provides mutual exclusion by allowing only one thread or process to enter the
critical section at a time.
○ Counting Semaphore: Can have a range of values, typically initialized to the
number of available resources. It allows a fixed number of threads or
processes to access the critical section or resource simultaneously.
3. Operations:
○ Wait (P Operation): Decrements the semaphore value. If the value is greater
than 0, the operation proceeds. If the value is 0, the process or thread is
blocked until the value is greater than 0.
○ Signal (V Operation): Increments the semaphore value. If there are any
processes or threads waiting, one of them is awakened.

// Semaphore S: an integer variable, initialized to 1
S = 1;

wait(S);      // also called P(); proceeds only while S > 0, then decrements S
/* critical section */
signal(S);    // also called V(); increments S
/* remainder section */

// wait() function
Wait(S) {
    while (S <= 0)
        ; // busy wait
    S--;
}


// signal() function
Signal(S) {
    S++;
}

WHAT ARE THE TWO TYPES OF SEMAPHORES?


Binary Semaphore: Ensures mutual exclusion by allowing only one process or thread to
access the critical section at a time.
Counting Semaphore: Manages a pool of resources, allowing a specified number of
processes or threads to access resources concurrently.
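Putting both kinds of semaphore to work, here is a hedged sketch of the classical
bounded-buffer (producer-consumer) solution using POSIX semaphores (assumes a POSIX
system; error handling omitted): 'empty' counts free slots, 'full' counts filled slots, and a
binary semaphore 'mutex' protects the buffer itself.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 5
int buffer[BUFFER_SIZE], in = 0, out = 0;
sem_t empty, full, mutex;

void *producer(void *arg) {
    for (int item = 1; item <= 10; item++) {
        sem_wait(&empty);                  /* wait for a free slot */
        sem_wait(&mutex);                  /* lock the buffer */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&full);                   /* signal: one more filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 10; i++) {
        sem_wait(&full);                   /* wait for a filled slot */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty);                  /* signal: one more free slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty, 0, BUFFER_SIZE);
    sem_init(&full, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}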

CLASSICAL PROBLEM 3: DINING PHILOSOPHERS PROBLEM

The Dining Philosophers Problem is a classic synchronization problem that illustrates issues
related to resource sharing and deadlock in concurrent systems. It involves a group of
philosophers sitting around a circular table, where each philosopher alternates between
thinking and eating. The challenge is to manage the allocation of shared resources (forks) to
avoid deadlock and ensure that all philosophers can eat.

Problem Description:

● Philosophers: Five philosophers sit around a table. Each philosopher needs two
forks to eat.
● Forks: There is a fork between each pair of adjacent philosophers. A philosopher
must pick up both adjacent forks to eat.


● Goal: Ensure that all philosophers get a chance to eat without causing deadlock or
starvation.

Challenges:

1. Deadlock: A situation where no philosopher can proceed because each is waiting for
a fork held by another, resulting in a circular wait.
2. Starvation: A situation where a philosopher might never get both forks if the
resources are not allocated fairly.

The structure of philosopher i:

while (true) {
    wait(chopstick[i]);
    wait(chopstick[(i + 1) % 5]);
    /* eat for a while */
    signal(chopstick[i]);
    signal(chopstick[(i + 1) % 5]);
    /* think for a while */
}

DINING PHILOSOPHER PROBLEM - SOLUTION WITH MUTEX

Initialize: Each fork is represented by a mutex.

Acquire Forks: Each philosopher will try to acquire the mutexes for the left and
right forks.

Release Forks: After eating, the philosopher releases the mutexes.

DINING PHILOSOPHER PROBLEM - SOLUTION WITH SEMAPHORES

Initialization: Use a semaphore for each fork to manage access.

Acquiring Forks: Each philosopher needs to acquire two semaphores (one for the
left fork and one for the right fork).

Releasing Forks: After eating, the philosopher releases both semaphores.
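As a concrete (illustrative, not canonical) rendering of the mutex-based solution, the sketch
below gives each fork a pthread mutex and breaks the circular wait by having the last
philosopher pick up the right fork first, one standard way to avoid deadlock:

#include <pthread.h>
#include <stdio.h>

#define N 5
pthread_mutex_t fork_mutex[N];             /* one mutex per fork */

void *philosopher(void *arg) {
    int i = *(int *)arg;
    int first = i, second = (i + 1) % N;
    if (i == N - 1) { first = (i + 1) % N; second = i; }  /* break the cycle */
    for (int round = 0; round < 3; round++) {
        /* think for a while */
        pthread_mutex_lock(&fork_mutex[first]);   /* acquire first fork */
        pthread_mutex_lock(&fork_mutex[second]);  /* acquire second fork */
        printf("philosopher %d eats\n", i);       /* eat for a while */
        pthread_mutex_unlock(&fork_mutex[second]);
        pthread_mutex_unlock(&fork_mutex[first]); /* release both forks */
    }
    return NULL;
}

int main(void) {
    pthread_t t[N];
    int id[N];
    for (int i = 0; i < N; i++)
        pthread_mutex_init(&fork_mutex[i], NULL);
    for (int i = 0; i < N; i++) {
        id[i] = i;
        pthread_create(&t[i], NULL, philosopher, &id[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(t[i], NULL);
    return 0;
}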


CHAPTER 3: DEADLOCK & MEMORY MANAGEMENT

WHAT IS A DEADLOCK?

Deadlock is a condition in a multi-threaded or multi-process environment where two or
more processes or threads are unable to proceed because each is waiting for the other to
release a resource, resulting in a standstill. This leads to a situation where none of the
processes can continue, causing a system halt.

EXAMPLE OF DEADLOCK

Consider a simplified example with two processes and two resources:

● Resources: R1 and R2
● Processes: P1 and P2

Scenario:

1. P1 holds R1 and requests R2.
2. P2 holds R2 and requests R1.

Here’s a timeline of the situation:

● Time 1: P1 acquires R1.
● Time 2: P2 acquires R2.
● Time 3: P1 requests R2, but P2 holds R2.
● Time 4: P2 requests R1, but P1 holds R1.

Result: Both processes are in a waiting state, causing a deadlock.


NECESSARY CONDITIONS FOR DEADLOCK OR DEADLOCK CHARACTERIZATION

Deadlocks can be characterized by four necessary conditions, often referred to as the
Coffman conditions:

1. Mutual Exclusion: At least one resource must be held in a non-shareable mode,
meaning only one process can use the resource at any given time. If another process
requests the resource, it must wait until the current process releases it.
2. Hold and Wait: A process holding at least one resource is allowed to request
additional resources without releasing its currently held resources.
3. No Preemption: Resources cannot be forcibly taken from processes holding them;
they must be released voluntarily. In other words, a resource cannot be preempted
from a process holding it, even if another process needs it.
4. Circular Wait: There must be a circular chain of two or more processes, where each
process holds a resource that the next process in the chain is waiting for.

TRAFFIC DEADLOCK EXAMPLE

a) Four Necessary Conditions for Deadlock

1. Mutual Exclusion: Only one car can occupy a segment of the road.
2. Hold and Wait: Cars hold their positions while waiting to move forward.


3. No Preemption: Cars cannot be forced to leave their positions; they can only move
voluntarily.
4. Circular Wait: Each car is waiting for the car in front of it, creating a circular chain of
waiting.

Since all four conditions are met, a deadlock occurs.

(b) Rule for Avoiding Deadlock in traffic deadlock

Allow at least one car to exit the intersection before another enters, preventing a circular
waiting scenario.

CONCEPT OF RESOURCE ALLOCATION GRAPH (RAG)

A Resource Allocation Graph (RAG) is a directed graph used to represent the allocation of
resources in a system and the requests made by processes. It provides a visual and analytical
method to monitor resource allocations, detect potential deadlocks, and apply strategies to
avoid deadlock situations.

Structure of a Resource Allocation Graph (RAG)

1. Vertices (Nodes): There are two types of nodes in a RAG:


o Process nodes (P): Represent the processes in the system (e.g., P1, P2..)
o Resource nodes (R): Represent the resources (e.g., R1, ..), with each resource
node having multiple instances if applicable.
2. Edges (Arcs): There are two types of edges:
o Request Edge (P → R): A directed edge from a process to a resource
indicates that the process has requested that resource.
o Assignment Edge (R → P): A directed edge from a resource to a process
indicates that the resource is currently allocated to that process.


Example: RAG with Deadlock (Cycle formation)

Example: RAG with a cycle but without Deadlock

How is the RAG utilized in deadlock avoidance and deadlock detection techniques?

Using RAG in Deadlock Avoidance

In deadlock avoidance, the RAG is used to ensure that the system remains in a safe state (a
state where the system can allocate resources to each process in some order without causing
a deadlock). Deadlock avoidance techniques use the RAG as follows:


1. Safe Allocation Verification: When a process requests a resource, the system checks
the RAG to simulate the allocation. If granting the request results in a cycle in the RAG,
the request is deferred or denied, as it could lead to deadlock. If no cycle is formed,
the request is granted.
2. Cycle-Free RAG Maintenance: By preventing cycles in the RAG, deadlocks are
avoided. The system consistently evaluates each resource allocation against the RAG
to maintain a cycle-free structure.

In deadlock avoidance, the RAG helps keep the system in a safe state by denying requests
that might lead to cycles.

Using RAG in Deadlock Detection

In deadlock detection, the RAG is used to periodically examine the system for potential
deadlocks. Deadlock detection techniques use the RAG as follows:

1. Cycle Detection Algorithm: The RAG is checked periodically for cycles. If a cycle
exists in a system with each resource having a single instance, it indicates a deadlock,
as each process in the cycle is waiting on a resource held by another in the cycle.
2. Handling Multiple Resource Instances: In systems where resources have multiple
instances, a different approach, such as the Banker’s Algorithm, may be needed to
detect deadlocks based on the RAG information, as a cycle alone doesn’t necessarily
mean a deadlock.

In deadlock detection, the RAG is used to detect cycles, and if a cycle is found, it signals a
potential or actual deadlock. By utilizing the RAG, systems can manage resources more
efficiently, prevent deadlocks, and resolve issues quickly if they do arise.

STEPS INVOLVED IN DEADLOCK DETECTION AND HOW RECOVERY CAN BE ACHIEVED

Deadlock detection involves identifying cycles in resource allocation to determine if deadlock exists. Once a deadlock is detected, specific recovery actions can be taken to restore
the system to a functioning state. Here’s a breakdown of the process:

Steps in Deadlock Detection

1. Resource Allocation Graph (RAG) Analysis:


o The system monitors resource allocations and requests using a RAG.
o A cycle-detection algorithm is applied to the RAG to identify if a deadlock has
occurred (especially in single-instance resource systems).
2. Cycle Detection:
o In single-instance resource systems, the presence of a cycle directly implies
deadlock.


o In systems with multiple instances, more sophisticated methods (e.g., Banker's Algorithm or Wait-For Graphs) are used to detect circular wait conditions that
signal potential deadlocks.
3. Periodic Checks:
o Deadlock detection can be conducted periodically or whenever resource
contention increases, to avoid performance degradation in high-load systems.

Steps for Deadlock Recovery

1. Process Termination:
o Terminate All Deadlocked Processes: Ending all processes in the deadlock
cycle is a straightforward approach but can cause significant data loss.
o Terminate One Process at a Time: The system iteratively terminates
processes from the cycle until deadlock is resolved. This minimizes the impact
but may require careful selection of which process to terminate (e.g., based on
priority, runtime, or resources used).
2. Resource Preemption:
o Resources are forcefully reallocated from certain processes to others in the
deadlock cycle to break the circular wait.
o Selecting Processes for Preemption: The system evaluates criteria like
process priority, resource holding time, and the ease of restarting processes.
o Rollback Mechanism: For interrupted processes, rollback mechanisms allow
processes to resume from a previous safe state once resources are available
again.
3. Process Checkpointing:
o Periodically save the state of processes so they can resume from a saved state
after deadlock recovery.
o Checkpoints help mitigate the effects of preemption or termination by
allowing processes to restart without losing significant progress.

Deadlock detection involves identifying cycles in resource allocations, and recovery can be
achieved by terminating processes, preempting resources, and using process checkpointing for
smoother recovery. These techniques help ensure minimal disruption and effective
resolution of deadlocks in the system.

PROBLEM ON DEADLOCK DETECTION

Given problem: You have a system with four processes (P1, P2, P3, P4) and four resources
(R1, R2, R3, R4). Each resource has only one instance. The current allocation is as follows:
P1 holds R1 and is waiting for R2. P2 holds R2 and is waiting for R3. P3 holds R3 and is
waiting for R4. P4 holds R4 and is waiting for R1. Is a deadlock possible in this system?

Solution: Yes, a deadlock is possible in this system. Here is why:

The current allocation and wait conditions for each process are as follows:


• P1 holds R1 and is waiting for R2.


• P2 holds R2 and is waiting for R3.
• P3 holds R3 and is waiting for R4.
• P4 holds R4 and is waiting for R1.

Constructing the Wait-For Graph

In a Wait-For Graph, an edge from one process to another represents a wait condition (e.g.,
P1 → P2 if P1 is waiting for a resource held by P2). Based on the given information, we
have:

• P1 → P2: P1 is waiting for R2, held by P2.


• P2 → P3: P2 is waiting for R3, held by P3.
• P3 → P4: P3 is waiting for R4, held by P4.
• P4 → P1: P4 is waiting for R1, held by P1.

These conditions form a cycle in the Wait-For Graph:

P1 → P2 → P3 → P4 → P1

This circular dependency means each process is waiting for a resource held by the next
process in the cycle. Since there is a cycle in the Wait-For Graph, deadlock exists. Each
process is waiting on a resource held by another in the cycle, so none of the processes can
proceed. This satisfies all four necessary conditions for deadlock:

1. Mutual Exclusion: Each resource has only one instance, so only one process can hold
a resource at a time.
2. Hold and Wait: Each process is holding a resource and waiting for another.
3. No Preemption: Resources cannot be forcibly taken from a process.
4. Circular Wait: The Wait-For Graph shows a circular chain of processes, each waiting
on the next.

Therefore, deadlock is indeed possible in this system.
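
Because every process here waits for exactly one other process, the Wait-For Graph can be checked for a cycle simply by following the wait edges. The sketch below hard-codes the four edges from this problem (0-indexed, so P1 is index 0) and is illustrative rather than real OS code:

#include <stdio.h>

#define N 4

/* wait_for[i] = index of the process that process i is waiting on */
int wait_for[N] = {1, 2, 3, 0};   /* P1->P2, P2->P3, P3->P4, P4->P1 */

int has_cycle(void) {
    for (int start = 0; start < N; start++) {
        int on_path[N] = {0};
        int p = start;
        while (p != -1 && !on_path[p]) {
            on_path[p] = 1;
            p = wait_for[p];      /* follow the wait edge */
        }
        if (p != -1) return 1;    /* revisited a process: cycle found */
    }
    return 0;
}

int main(void) {
    printf(has_cycle() ? "Deadlock detected\n" : "No deadlock\n");
    return 0;
}

A process that is not waiting would hold -1 in this scheme; since all four processes wait, the traversal starting at P1 returns to P1 and the cycle is reported.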


HOW DEADLOCK PREVENTION DIFFERS FROM DEADLOCK AVOIDANCE?

Feature              | Deadlock Prevention                                               | Deadlock Avoidance
Objective            | Eliminates one or more deadlock conditions to prevent deadlocks. | Allocates resources based on safe-state checks to avoid deadlocks.
Approach             | Proactive (restricts resource allocation in advance).            | Reactive (analyzes each request to maintain a safe state).
Resource Utilization | Lower, due to strict allocation rules.                            | Higher, as resources are allocated flexibly within safe states.
Complexity           | Easier to implement, but limits system flexibility.              | More complex, requires continuous safe-state monitoring.
Example              | Requiring preemption or disallowing hold and wait.               | Banker's Algorithm for safe-state verification.

• Prevention: Proactively restricts requests to avoid deadlock conditions.


• Avoidance: Dynamically grants resources if safe, optimizing resource use but with
higher complexity.

DEADLOCK AVOIDANCE METHODS – BANKER'S ALGORITHM


[Refer GCR OF 3BCA B OPERATING SYSTEM for a sample worked out problem]

Problem 1:

Examine the following questions using the Banker's algorithm, given that resource A has 7 instances, B has 2 instances, and C has 6 instances:

a. Is the system in a safe state?


b. If a request from process P2 arrives for (0,0,1), can the request be granted immediately?

Problem 2:
Examine the following questions using the banker’s algorithm:
Process | Allocation (A B C D) | Max (A B C D) | Available (A B C D)
P0      | 0 0 1 2              | 0 0 1 2       | 1 5 2 0
P1      | 1 0 0 0              | 1 7 5 0       |
P2      | 1 3 5 4              | 2 3 5 6       |
P3      | 0 6 3 2              | 0 6 5 2       |
P4      | 0 0 1 4              | 0 6 5 6       |
a. What is the content of the matrix Need?
b. Is the system in a safe state?
c. If a request from process P1 arrives for (0,4,2,0), can the request be granted immediately?
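
A minimal sketch of the Banker's safety algorithm, applied to the data of Problem 2, is given below (Need = Max − Allocation; matrix names such as alloc and maxm are illustrative). It repeatedly looks for a process whose Need can be met by the current Work vector, lets it finish, and reclaims its allocation:

#include <stdio.h>

#define P 5  /* processes P0..P4 */
#define R 4  /* resource types A..D */

int alloc[P][R] = {{0,0,1,2},{1,0,0,0},{1,3,5,4},{0,6,3,2},{0,0,1,4}};
int maxm [P][R] = {{0,0,1,2},{1,7,5,0},{2,3,5,6},{0,6,5,2},{0,6,5,6}};
int avail[R]    = {1,5,2,0};

int main(void) {
    int need[P][R], finish[P] = {0}, work[R], done = 0;
    for (int i = 0; i < P; i++)
        for (int j = 0; j < R; j++)
            need[i][j] = maxm[i][j] - alloc[i][j]; /* Need = Max - Allocation */
    for (int j = 0; j < R; j++) work[j] = avail[j];

    while (done < P) {
        int progressed = 0;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            int ok = 1;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { ok = 0; break; }
            if (ok) {                        /* P_i can run to completion */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];  /* P_i releases its resources */
                finish[i] = 1;
                printf("P%d ", i);
                done++;
                progressed = 1;
            }
        }
        if (!progressed) { printf("\nunsafe state\n"); return 0; }
    }
    printf("<- safe sequence\n");
    return 0;
}

Run on these matrices, the sketch prints the safe sequence P0 P2 P3 P4 P1, confirming that the system is in a safe state (part b). For part c, the request (0,4,2,0) from P1 would be deducted tentatively from Available and the same safety check repeated on the updated state.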


MEMORY MANAGEMENT

CONTIGUOUS MEMORY ALLOCATION


Contiguous memory allocation is a memory management technique where each process
receives a single, continuous block of memory. This makes addressing efficient, as the
process’s memory can be accessed through a starting address and block size.

• Single Block Allocation: Each process is allocated one contiguous memory block.
• Efficient Access: Easy to access and calculate addresses within the block.
• Fragmentation Issues:
o External Fragmentation: Free memory fragments form over time, reducing
efficiency.
o Internal Fragmentation: Allocated memory may exceed the process's needs,
leaving unused space.
Advantages:
• Simple to implement.
• Fast access due to contiguous layout.
Disadvantages:
• Prone to fragmentation.
• Limited flexibility, especially for large processes.

This approach is suitable for simple systems but is less used in modern systems, which prefer
non-contiguous allocation methods like paging and segmentation for better memory
utilization.

Example: Solve the following problem

Show how memory can be allocated using contiguous allocation.

Assume a total physical memory size of 1000 KB. The memory is divided into fixed partitions, and the operating system uses contiguous memory allocation.

There are five processes that need to be loaded into memory: Process A: 200 KB, Process B: 300 KB, Process C: 100 KB, Process D: 250 KB, Process E: 150 KB.

Solution:

In a contiguous memory allocation system, each process is allocated a single continuous block of memory. Given a total memory size of 1000 KB divided into fixed partitions, the
operating system allocates each process to a partition large enough to fit it. Here's how
memory would be allocated:

• Total Physical Memory: 1000 KB


• Processes and their memory needs:
o Process A: 200 KB


o Process B: 300 KB
o Process C: 100 KB
o Process D: 250 KB
o Process E: 150 KB

Step-by-Step Allocation

Assuming the memory is allocated in the order the processes arrive and partitions are
allocated as available:

1. Partition 1: Process A requires 200 KB.


o Allocates 200 KB to Process A.
o Remaining Memory: 1000 KB - 200 KB = 800 KB.
2. Partition 2: Process B requires 300 KB.
o Allocates 300 KB to Process B.
o Remaining Memory: 800 KB - 300 KB = 500 KB.
3. Partition 3: Process C requires 100 KB.
o Allocates 100 KB to Process C.
o Remaining Memory: 500 KB - 100 KB = 400 KB.
4. Partition 4: Process D requires 250 KB.
o Allocates 250 KB to Process D.
o Remaining Memory: 400 KB - 250 KB = 150 KB.
5. Partition 5: Process E requires 150 KB.
o Allocates 150 KB to Process E.
o Remaining Memory: 150 KB - 150 KB = 0 KB.

Resulting Allocation

All processes fit within the 1000 KB of physical memory using contiguous allocation, filling
the memory completely with no space left over.
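
The step-by-step arithmetic above can be expressed as a short allocation sketch (the names and layout are illustrative); each process receives the next contiguous block starting where the previous one ended:

#include <stdio.h>

int main(void) {
    int total = 1000;                        /* physical memory in KB */
    int size[] = {200, 300, 100, 250, 150};  /* processes A..E */
    const char *name = "ABCDE";
    int base = 0;                            /* next free address */

    for (int i = 0; i < 5; i++) {
        if (size[i] > total - base) {        /* not enough contiguous space left */
            printf("Process %c does not fit\n", name[i]);
            continue;
        }
        printf("Process %c: base %3d KB, size %3d KB\n", name[i], base, size[i]);
        base += size[i];                     /* advance past the new partition */
    }
    printf("Remaining memory: %d KB\n", total - base);
    return 0;
}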

IMPACT OF DIFFERENT MEMORY ALLOCATION STRATEGIES

Different memory allocation strategies affect system performance in various ways, particularly in terms of memory efficiency, speed, and fragmentation. Here's a breakdown:

1. Contiguous Memory Allocation

• Performance: Generally fast memory access due to continuous memory blocks.


• Fragmentation:
o External Fragmentation: Frequent memory allocations and deallocations
create gaps, leading to inefficient memory use.
o Internal Fragmentation: Fixed-sized partitions may leave unused space
within allocated blocks.


• Flexibility: Low flexibility in accommodating large or varying process sizes; finding contiguous free blocks can be challenging.

Impact: Simple and efficient for small systems but can lead to performance degradation over
time due to fragmentation.

2. Segmentation

• Performance: Provides faster access within segments but requires address translation.
• Fragmentation:
o External Fragmentation: Larger segments can still lead to gaps between
memory segments, though less severe than contiguous allocation.
o Internal Fragmentation: Reduced, as segments are allocated based on actual
needs.
• Flexibility: Allows logical division of a program (e.g., code, data, stack), improving
memory management efficiency for complex processes.

Impact: Suitable for systems requiring logical memory division; segmentation provides
flexibility but may need compaction over time.

3. Paging

• Performance: Slower than contiguous allocation due to the need for additional
address translation, but manageable with Translation Lookaside Buffers (TLBs).
• Fragmentation:
o No External Fragmentation: Physical memory doesn’t need to be contiguous,
eliminating external fragmentation.
o Minimal Internal Fragmentation: Page sizes can reduce unused memory
within blocks.
• Flexibility: High flexibility, as processes can grow or shrink without needing
contiguous memory.

Impact: Ideal for large, complex systems; reduced fragmentation and high flexibility
outweigh the minor performance costs of address translation.

To summarize:

• Contiguous Allocation: Fast but prone to fragmentation; ideal for simpler systems.
• Paging: Eliminates external fragmentation and is highly flexible, making it suitable
for modern systems.
• Segmentation: Balances performance with logical memory structure but may
require compaction.


DIFFERENCES BETWEEN INTERNAL FRAGMENTATION & EXTERNAL FRAGMENTATION


Internal fragmentation involves wasted space within allocated blocks, while external
fragmentation involves scattered free memory. Both reduce memory utilization, but external
fragmentation can significantly hinder allocation efficiency and system performance due to
the need for compaction.

Effects on System Performance

• Internal Fragmentation:
o Impact: Reduces effective memory utilization; generally does not affect access
speed.
o Resource Waste: Leads to inefficient use of memory resources.
• External Fragmentation:
o Impact: Can prevent large allocations, causing allocation failures; affects
overall memory efficiency.
o Resource Management: Compaction is resource-intensive, requiring CPU
time and potentially disrupting processes.

WHAT IS PAGING AND ITS IMPORTANCE IN MEMORY MANAGEMENT


Paging is a memory management technique that helps operating systems efficiently manage
memory by dividing it into fixed-size blocks called pages.
Paging improves memory management by eliminating fragmentation, supporting virtual
memory, optimizing allocation, ensuring process isolation, and simplifying address
translation. This results in a more efficient and organized use of memory resources.

PURPOSE OF PAGE TABLE


The page table is a crucial data structure in memory management, particularly for paging
systems. The page table is essential for translating addresses, managing memory, supporting
virtual memory, tracking page status, ensuring process isolation, and facilitating efficient
page replacement in operating systems.

The page table is a vital data structure in memory management that serves several key
purposes:

1. Address Translation: Maps virtual page numbers to physical frame numbers, enabling conversion of virtual addresses to physical addresses.
2. Memory Management: Tracks which pages are loaded in physical memory,
optimizing memory usage.
3. Page Status Tracking: Maintains status bits (e.g., present/absent, modified) for each
page, important for page replacement and data integrity.
4. Page Replacement Facilitation: Assists in identifying which pages can be swapped
out when memory is full, enabling effective page replacement strategies.


WHAT IS SEGMENTATION?
Segmentation is a memory management technique that divides a program's memory into
variable-sized, logical segments, such as code, data, and stack.

CONCEPT OF SWAPPING IN MEMORY MANAGEMENT

Swapping in memory management is a technique that temporarily moves processes or data between main memory (RAM) and secondary storage (e.g., hard disk) to free up memory
space for other processes.

• Process Migration: Transfers a process from RAM to disk to free memory for other
processes.
• Context Switching: Part of saving a running process's state when switching to
another process.
• Page Replacement: Works with virtual memory to swap out pages when memory is
full.


• Performance Impact: Excessive swapping can lead to thrashing, reducing system performance.
• Comparison: Swapping moves entire processes, while paging deals with fixed-size
data blocks (pages).

Swapping enhances multitasking by managing limited RAM, but it requires careful control to
maintain performance.


CHAPTER 4: MEMORY MANAGEMENT & FILES CONCEPT

WHAT IS VIRTUAL MEMORY?


Virtual memory is a memory management technique that extends physical memory by using disk
space. It combines RAM and disk space to create a larger, continuous memory space, allowing
efficient memory use beyond physical limits.

Key Features:

• Extended Capacity: Allows applications to use more memory than physically available.
• Paging & Swapping: Moves inactive data to disk, loading it into RAM as needed.
• Efficient Usage: Keeps frequently accessed data in RAM, optimizing performance.

ROLE OF VIRTUAL MEMORY IN ENHANCING PROTECTION

Virtual memory protects system stability and security by isolating processes, enforcing
access controls, and separating user and kernel spaces. The following mechanisms are
incorporated:

1. Isolated Address Spaces: Each process has its own virtual address space, preventing
unauthorized access to another process's memory.
2. Separation of User and Kernel Space: User processes are isolated from critical
kernel data, reducing risks of interference with the operating system.
3. Access Control: Page tables include permission bits (e.g., read, write, execute) that
restrict operations on memory pages, ensuring only allowed actions are performed.

DIFFERENTIATE PHYSICAL MEMORY AND VIRTUAL MEMORY


Aspect     | Physical Memory                                          | Virtual Memory
Definition | Actual RAM installed in the computer.                    | An abstraction using disk space to extend memory.
Size       | Limited by hardware capacity.                            | Larger than physical memory, combining RAM with disk storage.
Addressing | Accessed directly by the CPU.                            | Accessed through mapping (e.g., page tables) that translates addresses.
Management | Managed by the operating system with fixed-size blocks.  | Managed through paging/segmentation, allowing more efficient use of memory.
Purpose    | Provides fast storage for active processes.              | Enables multitasking and execution of larger applications.
Protection | Less isolation between processes.                        | Provides isolation and protection through separate address spaces.


SWAP SPACE MANAGEMENT IN VIRTUAL MEMORY

Swap space management is essential in virtual memory management, acting as a buffer that
extends available memory, stores inactive data, and facilitates efficient process management.
It enhances performance, flexibility, and stability in a multitasking environment.

Swap space plays a crucial role in virtual memory management by serving as an overflow
area where data can be temporarily stored when physical memory (RAM) is full. Here’s how
it contributes to efficient process management:

Role of Swap Space

1. Extension of Virtual Memory:


o Swap space allows the operating system to extend the total available memory beyond
the physical RAM, enabling the execution of larger applications or multiple processes
simultaneously.
2. Storage for Inactive Data:
o When RAM is full, the operating system can move less frequently used data or entire
processes from RAM to swap space. This process, known as swapping or paging out,
frees up physical memory for active processes.
3. Facilitating Context Switching:
o Swap space helps manage context switches between processes. When a process is
paused, its memory state can be saved in swap space, allowing the system to load and
resume other processes without losing information.
4. Memory Management Efficiency:


o By using swap space, the operating system can keep the most active data in RAM
while offloading idle data to disk, improving overall memory utilization and system
responsiveness.
5. Preventing Out-of-Memory Conditions:
o Swap space provides a buffer against out-of-memory conditions, allowing the system
to handle temporary memory spikes without crashing or terminating processes.

Advantages of swap space:

• Improved Performance: Processes can run more smoothly since the system can
dynamically allocate memory as needed, reducing the chances of memory-related
bottlenecks.
• Flexibility: The ability to move processes and data between RAM and swap space allows for
greater flexibility in managing resources, particularly in multitasking environments.
• System Stability: By providing additional memory resources, swap space enhances the
stability and reliability of the operating system, ensuring that processes can continue running
even under heavy load.

DEMAND PAGING & ADDRESS TRANSLATION MECHANISM OF DEMAND PAGING

Demand paging is a memory management scheme that loads pages into memory only when
they are needed, optimizing memory usage and reducing loading times.

Features of demand paging:

1. Lazy swapper: Pages are loaded on demand rather than all at once.
2. Reduced Memory Footprint: Only necessary pages are loaded, allowing for more processes
to run simultaneously.
3. Page Faults: Occur when a process accesses a page not currently in memory, prompting the
operating system to load it from disk.


Address Translation Mechanism

1. Logical Address Space: Generated by the process, consisting of a page number and an offset.
2. Page Table: Maps logical page numbers to physical frame numbers and indicates if a page is
in memory (valid/invalid bits).

3. Translation Process:
o Page Table Lookup: The logical page number is used to check the page table.
o Page Present: If valid, calculate the physical address:
Physical Address = (Frame Number × Page Size) + Offset


o Page Fault Handling: If invalid, the OS loads the page from disk, updates the page
table, and resumes the process.
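
The lookup and the physical-address formula above can be captured in a few lines of C; this is an illustrative sketch, not a real MMU interface (PAGE_SIZE, the pte_t structure, and the table size are assumptions):

#include <stdio.h>

#define PAGE_SIZE 4096
#define NUM_PAGES 16

typedef struct { int frame; int valid; } pte_t;  /* simplified page-table entry */

pte_t page_table[NUM_PAGES];

long translate(long vaddr) {
    long page   = vaddr / PAGE_SIZE;   /* logical page number */
    long offset = vaddr % PAGE_SIZE;   /* byte within the page */
    if (!page_table[page].valid)
        return -1;                     /* page fault: OS must load the page from disk */
    return (long)page_table[page].frame * PAGE_SIZE + offset;
}

int main(void) {
    page_table[2].frame = 7;           /* map logical page 2 to physical frame 7 */
    page_table[2].valid = 1;
    printf("VA 8200 -> PA %ld\n", translate(8200)); /* page 2, offset 8 -> 28680 */
    return 0;
}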

HARDWARE SUPPORT NEEDED FOR DEMAND PAGING

To support demand paging, the following hardware functions are essential:

1. Memory Management Unit (MMU): Translates logical addresses to physical addresses using the page table.

2. Page Table: Stores mappings between logical page numbers and physical frame numbers,
including valid/invalid bits to indicate page presence in memory.

3. Translation Lookaside Buffer (TLB): A cache that stores recent address translations,
speeding up the translation process by reducing the need to access the page table.

4. Page Fault Handling: The CPU generates page fault interrupts when a page is not found
in memory, prompting the operating system to load the required page from disk.

ROLE OF PAGE TABLE IN MEMORY MANAGEMENT

The page table is a critical data structure in memory management with the following key
purposes:

• Address Translation: Maps logical page numbers to physical frame numbers in memory, enabling the CPU to access the correct physical addresses.
• Page Status Tracking: Maintains valid/invalid bits to indicate if a page is in memory,
along with access permissions (read, write, execute).
• Support for Virtual Memory: Enables the implementation of virtual memory,
allowing processes to use more memory than physically available and facilitating on-
demand page loading.
• Page Fault Handling: Provides necessary information for the operating system to
manage page faults, including locating required pages on disk for loading into
memory.

HIERARCHICAL PAGING STRUCTURE


This hierarchical paging structure enables efficient memory management by using multiple levels of
page tables to translate virtual addresses into physical addresses, thus reducing memory
consumption and allowing for flexible handling of large address spaces.
Two level hierarchical page table:


How it works

• Virtual Address: The address generated by the CPU that consists of a page number
and an offset.
• Level 1 Page Table: Contains Page Directory Entries (PDEs) that point to the second-
level page tables.
• Level 2 Page Table: Contains Page Table Entries (PTEs) that map to physical frames
in memory.
• Offset: Specifies the exact byte within the physical page to access.
• Physical Memory: The actual memory frames where the data is stored.
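
In a typical two-level scheme with 32-bit virtual addresses, the address is split into a 10-bit directory index, a 10-bit table index, and a 12-bit offset (the classic x86 layout, used here as an assumption). The split can be sketched with simple bit operations; the function names are illustrative:

#include <stdio.h>

/* 32-bit virtual address: | 10-bit PDE index | 10-bit PTE index | 12-bit offset | */
unsigned dir_index(unsigned vaddr)   { return (vaddr >> 22) & 0x3FF; } /* Level 1 entry */
unsigned table_index(unsigned vaddr) { return (vaddr >> 12) & 0x3FF; } /* Level 2 entry */
unsigned page_offset(unsigned vaddr) { return vaddr & 0xFFF; }         /* byte in frame */

int main(void) {
    unsigned va = 0x00403007u;  /* example virtual address */
    printf("PDE %u, PTE %u, offset %u\n",
           dir_index(va), table_index(va), page_offset(va)); /* 1, 3, 7 */
    return 0;
}

Only the second-level tables that are actually in use need to exist, which is why the hierarchical form consumes far less memory than a single flat table for a sparse address space.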

COMPARE AND CONTRAST PAGING AND SEGMENTATION AND THEIR ADVANTAGES AND DISADVANTAGES

Aspect            | Paging                                                           | Segmentation
Definition        | Divides logical address space into fixed-size pages.            | Divides logical address space into variable-sized segments.
Size              | Fixed-size pages can cause internal fragmentation.              | Variable-sized segments can lead to external fragmentation.
Memory Allocation | Uses a page table for mapping logical pages to physical frames. | Uses a segment table for mapping logical segments to physical memory.
Fragmentation     | Internal fragmentation occurs with partially filled pages.      | External fragmentation may occur as segments are allocated/deallocated.
Ease of Use       | Simpler management due to fixed page sizes.                     | More intuitive, reflecting program structure (e.g., functions).
Protection        | Provides protection at the page level; easier to share pages.   | Allows protection and sharing at the segment level.
Overhead          | Large page tables may consume significant memory.               | Segment tables can also be large but may have reduced overhead.
Implementation    | Simpler due to fixed sizes.                                      | More complex due to variable sizes and segment management.

Advantages of Paging

• Simplicity: Easy to implement with fixed-size pages.


• No External Fragmentation: Eliminates external fragmentation.
• Efficient Memory Utilization: Works well with process working sets.

Disadvantages of Paging

• Internal Fragmentation: Wastes memory within the last page.


• Complex Address Translation: Requires multiple memory accesses for translation.

Advantages of Segmentation

• Logical Structure: Mirrors program organization for easier management.


• No Internal Fragmentation: Allows for efficient use of memory with variable sizes.
• Flexible Protection: Provides tailored protection for different segments.

Disadvantages of Segmentation

• External Fragmentation: Can lead to wasted memory over time.


• Complex Management: More complex to manage variable-sized segments.

Paging offers simplicity and eliminates external fragmentation at the cost of internal
fragmentation, while segmentation provides logical organization and flexible protection but
can suffer from external fragmentation and complexity. Many modern systems combine both
methods to maximize their benefits.


HOW PAGING HELPS IN MANAGING MEMORY IN AN OPERATING SYSTEM?

Paging optimizes memory management by allowing efficient use of physical memory, supporting virtual memory, and simplifying allocation while providing process isolation and
security. The advantages are:

1. Elimination of External Fragmentation:


o Allows non-contiguous allocation of memory, thus eliminating external
fragmentation and making better use of available frames.
2. Support for Virtual Memory:
o Facilitates virtual memory, enabling processes to run even if their entire address
space isn't in physical memory by loading only necessary pages.
3. Simplified Memory Allocation:
o Fixed-size pages simplify the allocation and deallocation process, as frames can be
easily managed without the need for contiguous space.
4. Efficient Memory Use:
o Enables dynamic swapping of pages in and out of memory, allowing multiple
processes to share memory resources effectively.

WHAT IS PAGE FAULT? EXPLAIN THE STEPS IN HANDLING PAGE FAULT.

A page fault is an event that occurs when a program attempts to access a page of memory
that is not currently loaded in physical memory (RAM).

Steps in handling page fault:

The following steps efficiently handle page faults, ensuring processes can access required
pages while optimizing memory management.


1. Locate the Required Page:


o Consult the page table to find the location of the required page on disk.
2. Trap a Victim Page:
o If memory is full, choose a page to evict using a page replacement algorithm.
3. Write Victim Page to Disk:
o If the evicted page is modified, write it back to disk to save changes.
4. Load the Required Page into Memory:
o Read the needed page from disk into a free frame in physical memory.
5. Update the Page Table:
o Update the page table to map the logical page to the new physical frame and set the
valid bit.
6. Continue Execution:
o The process executes as if the page fault had not occurred, now with the required
page available.

PAGE REPLACEMENT ALGORITHMS


Page replacement algorithms are strategies used by operating systems to manage the
contents of the page table and determine which pages to swap out of physical memory when
new pages need to be loaded. Here are some of the most common page replacement
algorithms:

1. First-In-First-Out (FIFO)

• Description: Replaces the oldest page in memory, essentially following a queue structure where the first page loaded is the first to be removed.
• Implementation: Maintains a simple queue of pages, with new pages added to the
back and pages removed from the front.

2. Least Recently Used (LRU)

• Description: Replaces the page that has not been used for the longest period of
time. The rationale is that pages used recently will likely be used again soon.
• Implementation: Can be implemented using a counter for each page or a stack to
track the order of page usage.

3. Optimal Page Replacement

• Description: Replaces the page that will not be used for the longest period of time
in the future. This is considered the most efficient algorithm, but it requires
knowledge of future requests, which is impractical in real systems.
• Implementation: Often used as a benchmark for evaluating other algorithms.


SAMPLE PAGE FAULT PROBLEM


Find the page faults using FIFO, Optimal and LRU for the reference string:
7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1 with 3 frames.

// Try for different number of frames also.
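
A sketch of the FIFO case is given below; it counts the faults for the reference string above with 3 frames (the frame array and queue index are illustrative). LRU and Optimal differ only in how the victim frame is chosen:

#include <stdio.h>

int main(void) {
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,0,3,2,1,2,0,1,7,0,1};
    int n = sizeof(ref) / sizeof(ref[0]);
    int frames[3] = {-1, -1, -1};  /* three empty frames */
    int next = 0, faults = 0;      /* next = index of the oldest (FIFO victim) frame */

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < 3; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (!hit) {                /* page fault: replace the oldest page */
            frames[next] = ref[i];
            next = (next + 1) % 3;
            faults++;
        }
    }
    printf("FIFO page faults: %d\n", faults);
    return 0;
}

Changing the size of the frames array (and the modulus) lets you repeat the experiment for a different number of frames.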

RANKING OF PAGE-REPLACEMENT ALGORITHMS


Belady’s Anomaly
Belady’s Anomaly is a phenomenon observed in certain page replacement algorithms, specifically the
First-In-First-Out (FIFO) algorithm, where increasing the number of page frames allocated to a
process results in an increase in the number of page faults.


Algorithm                 | Page-Fault Rate | Belady's Anomaly | Rank (1-5)
Optimal Replacement       | Lowest possible | No               | 5
Least Recently Used (LRU) | Very low        | No               | 4
First-In-First-Out (FIFO) | High            | Yes              | 2

Ranking:

• Optimal Replacement (Rank 5):


o This algorithm is considered perfect as it has the lowest possible page-fault rate since
it replaces the page that will not be needed for the longest time in the future. It does
not suffer from Belady's anomaly.
• Least Recently Used (LRU) (Rank 4):
o LRU performs well and typically has a low page-fault rate because it replaces the least
recently accessed page. It does not suffer from Belady's anomaly.
• First-In-First-Out (FIFO) (Rank 2):
o FIFO can lead to a higher page-fault rate compared to LRU and Optimal, and it is
susceptible to Belady's anomaly, where increasing the number of frames can lead to
more page faults.

In summary, Optimal Replacement ranks the highest, followed by LRU, while FIFO ranks
lower due to its inefficiencies and the potential for Belady's anomaly.

WHAT IS THRASHING? CONTROL MEASURES FOR THRASHING


Thrashing is a condition in operating systems where excessive paging occurs, causing a significant
decline in performance. This happens when a process spends more time swapping pages in and out
of memory than executing its instructions, leading to high page fault rates.

Drawbacks:

• Wasted CPU Time: The CPU spends more time handling page faults than executing processes.
• Poor System Performance: Overall system throughput is greatly reduced.

Control Measures by the OS:

1. Efficient Page Replacement Algorithms: Use algorithms like LRU or optimal to reduce page
faults.
2. Load Control: Limit the number of processes in memory to prevent excessive competition
for resources.
3. Increased Physical Memory: Upgrade system memory to provide more space for processes.
4. Process Swapping: Temporarily swap out non-essential processes to free up resources.
5. Thrashing Detection: Monitor page fault rates and take corrective action when thrashing is
detected.


FILE MANAGEMENT

WHAT IS A FILE?
A file is a fundamental entity in a computer operating system that represents a collection of data, identified by a unique name, and managed by the file system to facilitate organized storage, retrieval, and manipulation of information. In other words, a file is a named collection of related data stored on a storage medium. Files serve as the basic unit of storage in an operating system, allowing users and applications to store, retrieve, and manage data in a structured way.

LIST THE FILE ATTRIBUTES

File attributes in an operating system are metadata that describe the properties of a file
and help manage its storage and access. Common file attributes include:

• Name: The file's identifier, often including a file extension (e.g., document.txt).
• Type: Indicates the file format (e.g., text, image, executable).
• Location: The path to the file in the file system (e.g., /home/user/document.txt).
• Size: The total data size of the file, measured in bytes.
• Creation Time: Timestamp of when the file was created.
• Modification Time: Timestamp of the last modification made to the file.
• Access Time: Timestamp of the last time the file was accessed.
• Permissions: Access rights that determine who can read, write, or execute the file (e.g.,
owner, group, others).
• Owner: The user account that owns the file.
• Links: The number of hard links pointing to the file.
• Status Flags: Special characteristics like hidden or read-only.

FILE SYSTEM STRUCTURE

A file system structure organizes and manages data on a storage device through multiple
components and layers, enabling efficient file storage, retrieval, and management.

Key Components

• Boot Control Block: Stores essential data for system startup.


• Volume Control Block: Holds metadata about the file system, such as disk space and free
blocks.
• File Control Block (FCB): Contains file-specific data like size, permissions, and storage
locations.
• Directories: Organize files hierarchically, storing file metadata and paths.
• Data Blocks: The actual storage blocks where file data resides.

This structured hierarchy allows efficient, secure, and organized file access and management
across the system.


PURPOSE OF FILE ALLOCATION TABLE (FAT)

A File Allocation Table (FAT) is a data structure used by an operating system to manage the
locations of files on a disk. It acts as an index, storing information about each file's location
on the storage medium and enabling the OS to efficiently retrieve file data.

Key Functions:

1. Mapping Locations: Tracks each file’s data blocks on the disk.


2. Space Management: Identifies free and occupied blocks for efficient storage.
3. File Access: Provides quick navigation through blocks, even if non-contiguous.
4. Fragmentation Handling: Links scattered data blocks to support fragmented files.
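
Reading a file under FAT amounts to following a chain of table entries until an end-of-chain marker is reached; the sketch below uses a tiny illustrative table (the fat array, the FAT_EOF marker, and the start block are all assumptions for the example):

#include <stdio.h>

#define FAT_EOF -1  /* end-of-chain marker (illustrative) */

/* fat[b] holds the number of the block that follows block b in its file */
int fat[8] = {3, FAT_EOF, 5, 2, FAT_EOF, 1, FAT_EOF, FAT_EOF};

void print_chain(int start) {
    for (int b = start; b != FAT_EOF; b = fat[b]) /* follow the linked entries */
        printf("block %d -> ", b);
    printf("EOF\n");
}

int main(void) {
    print_chain(0); /* file starting at block 0: 0 -> 3 -> 2 -> 5 -> 1 -> EOF */
    return 0;
}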

FILE OPERATIONS: OPEN() AND CLOSE()

open() Operation: Establishes a connection between a program and a file for reading or
writing.

• Functionality:
o Specifies the mode of access (e.g., read, write).
o Returns a file descriptor (or handle) for subsequent operations.
o Allocates necessary resources for file operations.

close() Operation: Terminates the connection to an open file, ensuring proper cleanup.

• Functionality:
o Releases resources allocated during the open() operation.
o Flushes any buffered data to the storage medium, preserving data integrity.
o Updates file metadata (e.g., last modification time).
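
On POSIX systems these two operations correspond directly to the open() and close() system calls; a minimal sketch follows (the file name is illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("notes.txt", O_RDONLY); /* request read-only access */
    if (fd == -1) {
        perror("open");                   /* e.g., file missing or no permission */
        return 1;
    }
    /* ... read(fd, ...) calls would use the descriptor here ... */
    close(fd);                            /* release the descriptor and its resources */
    return 0;
}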

FILE CONTROL BLOCK & ITS PURPOSE

A File Control Block (FCB) is a data structure used by an operating system to manage
information about a specific file in a file system. Its primary purposes include:

1. Metadata Storage: The FCB contains essential file information, such as:
o File Name: The name of the file.
o File Type: The format of the file (e.g., text, image).
o File Size: The size of the file in bytes.
o Timestamps: Creation and modification dates.
o Access Permissions: Rights for reading, writing, or executing the file.
2. File Location Management: It tracks the physical location of the file on the storage
medium, including disk block addresses where the file data is stored.
3. File Status Information: The FCB maintains the current status of the file, including:
o Open/Close Status: Indicates if the file is currently open.
o File Locks: Information on exclusive access locks.


4. Facilitating File Operations: The FCB is used for reading, writing, and managing
access control, ensuring secure and efficient file operations.

FILE FRAGMENTATION
Fragmentation in file systems refers to the condition where the storage space of a disk is inefficiently
utilized, leading to the division of files into non-contiguous segments. This can occur over time as
files are created, modified, and deleted, resulting in gaps or scattered pieces of free space on the disk.

Types of File fragmentation

1. Internal Fragmentation: This occurs when allocated memory blocks are larger
than the actual data being stored, leading to wasted space within those blocks.
2. External Fragmentation: This happens when free space is scattered throughout the
disk, preventing the allocation of contiguous blocks for new files or larger files, even
though there is enough total free space available.

TYPES OF FILE ACCESS METHODS

File access methods define how data is read from and written to files. The primary methods
are:

1. Sequential Access: Data is accessed in a linear order, one record after another.

• Characteristics:
o Suitable for files processed in order (e.g., log files).
• Advantages:
o Simple and efficient for large data processing.
• Disadvantages:
o Not suitable for random data access; slower if the desired data is at the end.

2. Direct (Random) Access: Data can be read or written at any location in the file.

• Characteristics:
o Requires fixed-length records; commonly used in indexed files.
• Advantages:
o Fast access times for specific records; flexible for random retrieval.
• Disadvantages:
o More complex to implement; may have overhead for management.

3. Indexed Access: An index maps keys to data locations, enabling quick searches.

• Characteristics:
o The index holds pointers to actual data records.
• Advantages:
o Fast access based on key values; supports efficient searching and sorting.
• Disadvantages:


o Additional storage overhead; can slow down write operations due to index updates.

4. Hashed Access: A hash function maps keys to specific file locations for fast retrieval.

• Characteristics:
o Uses a hash table to associate keys with data addresses.
• Advantages:
o Extremely fast lookups; efficient for unique key operations.
• Disadvantages:
o Collisions require handling; less efficient for range queries.

WHAT IS A DIRECTORY? WHAT IS THE PURPOSE OF A DIRECTORY?

A directory in a file system is a special file or structure that organizes and manages files and
other directories on a storage medium. Its primary purpose is to maintain a structured,
efficient, and user-friendly organization of files.

Purposes of a Directory in a File System:

1. File Organization: Directories organize files into a structured hierarchy, making it easy for users and applications to locate, access, and manage files. This can include
creating subdirectories for nested organization.
2. Path Management: Directories support pathnames, providing unique paths for files.
Paths are essential for distinguishing files with the same name located in different
directories.
3. Metadata Storage: Directories store metadata about each file or subdirectory, such
as file names, sizes, permissions, creation/modification dates, and pointers to the
file's location on disk.
4. Access Control: Directories can enforce access permissions, restricting who can read,
write, or execute files within them, supporting security and user management.
5. Efficient File Operations: By organizing files, directories enhance file access
efficiency, reducing the time needed to locate and access files.

A directory in a file system serves as a structural component for organizing files, managing
paths, storing metadata, enforcing access control, and improving file operation efficiency.

WHAT IS DIRECTORY STRUCTURE? REAL WORLD EXAMPLES

A directory structure in a file system is an organized hierarchy for managing files and
folders on a storage device, allowing users and applications to easily locate, access, and
manage files. Directory structures are foundational to file management, enhancing
organization, access control, and ease of navigation.


Common Directory Structures:

1. Single-Level Directory: All files are contained in one directory, without subdirectories.
2. Two-Level Directory: Each user has a unique directory, containing their files.
3. Tree-Structured Directory: A hierarchical structure with a root directory and nested
subdirectories.
4. Acyclic-Graph Directory: Similar to a tree but allows files or directories to be shared by
different users.
5. General Graph Directory: Allows shared directories and supports cyclic links but needs
special handling to avoid loops.

Real-World examples of directory structures

1. UNIX/Linux File System (Tree-Structured Directory)

• Structure: The UNIX/Linux file system is a tree structure rooted at / (the root directory). It
branches into subdirectories like /home, /etc, /usr, and /var.
• Purpose: This structure organizes files based on functionality, user profiles, and system
configurations:
• Advantages: A tree structure is efficient for accessing files by pathnames and supports
complex hierarchical organization, making it ideal for systems with multiple users and
applications.

2. Windows File System (Acyclic-Graph Directory)

• Structure: Windows uses a combination of a tree structure and shortcut links (similar to
acyclic graphs). The root directories C:\, D:\, etc., represent different drives, and each drive
has its own tree structure.
• Purpose:
o The C:\ drive typically contains the Windows and Program Files directories.
o Users have directories under C:\Users, allowing personalized storage and settings.
o Windows shortcuts allow files and applications to be accessible from multiple
locations.
• Advantages: The structure allows shared resources without duplicating files and provides
flexibility in accessing commonly used files or applications from multiple locations.

PROCESS OF FILE SYSTEM IMPLEMENTATION

The process of file system implementation in an operating system involves multiple steps
and components to create a structured, efficient, and reliable system for managing files on
storage devices. The implementation of a file system in an operating system involves
partitioning and formatting the storage medium, managing free space, using file control
blocks for metadata storage, structuring directories, defining file allocation methods, and
optimizing performance.

Below is a detailed overview of the implementation process:


1. Disk Partitioning: The storage medium is divided into partitions, each of which can
contain a separate file system.

• Purpose: Allows for organizing data more effectively and can enable different file systems on
the same disk.

2. Formatting the File System: Each partition is formatted to create a specific file system
structure.

• Components Created:
o Boot Block: Contains information necessary for starting the operating system.
o Inode Table: Contains information about each file, including its attributes and
location on the disk.

3. Free Space Management: Tracks available and allocated space on the storage device.

• Methods:
o Bitmaps: A binary representation of blocks where 1 indicates used and 0 indicates
free.
o Linked List: A list of free blocks, where each block points to the next available block.
o Free Block Count: Keeps track of the number of available blocks for quick access.

4. File Control Block (FCB): A data structure that contains metadata for each file in the file
system.

• Attributes:
o File Name: The name assigned to the file.
o File Size: The total size of the file.
o Access Permissions: Read/write/execute permissions.
o Timestamps: Creation and modification dates.
o Location Pointers: Addresses of the blocks where the file data is stored.

5. Directory Structure Implementation: Organizes files into a hierarchy to facilitate easy navigation and management.

• Types:
o Single-Level Directory: All files in one directory.
o Two-Level Directory: Each user has a separate directory.
o Tree-Structured Directory: A hierarchical structure with directories and
subdirectories.
o Acyclic-Graph Directory: Allows sharing of files among users without duplicating
them.


6. File Allocation Methods

• Methods:
o Contiguous Allocation: Files are stored in contiguous blocks, which allows for fast
access but may lead to fragmentation.
o Linked Allocation: Each file's blocks are linked in a chain, which eliminates
fragmentation but can slow down access.
o Indexed Allocation: Uses an index block to store pointers to data blocks, allowing
for random access.

7. Implementation of File Operations

• Basic Operations:
o Create: Allocate space and an FCB for the new file, updating the directory.
o Read: Access the FCB to retrieve data from the corresponding blocks.
o Write: Update data in the blocks and modify the FCB as necessary.
o Delete: Remove the file entry from the directory, mark the blocks as free, and
deallocate the FCB.

8. Security and Access Control: Implementing mechanisms to control access to files.

• Methods:
o Access permissions (read, write, execute) defined in the FCB.
o User authentication and authorization systems.


CHAPTER 5: DISK MANAGEMENT & FILE MANAGEMENT

PURPOSE OF DISK SCHEDULING AND ITS IMPORTANCE

Disk Scheduling is the process of managing the order in which disk I/O requests are
processed by the operating system.

Purpose

1. Minimize Latency: Reduces the time taken for disk operations.


2. Increase Throughput: Allows more I/O operations to be completed in a given time.
3. Reduce Seek Time: Optimizes the movement of the read/write head to minimize
access time.
4. Ensure Fairness: Provides equal opportunities for all processes to access the disk.
5. Prioritize Requests: Addresses urgent or critical requests promptly.

Importance

• Performance Optimization: Essential for maintaining system responsiveness and efficiency.
• Resource Management: Helps manage limited I/O resources in multitasking
environments.
• System Stability: Prevents bottlenecks and excessive wait times, ensuring smooth
operation.

In summary, disk scheduling is crucial for enhancing disk I/O performance and overall
system efficiency.

RELATIONSHIP BETWEEN SEEK TIME AND ROTATIONAL LATENCY

Seek Time and Rotational Latency are critical components of disk access time in hard disk
drives (HDDs):

Seek Time: The time taken for the read/write head to move to the correct track where the
data is located.

Rotational Latency: The time taken for the desired sector to rotate under the read/write
head after the head has reached the correct track.

Relationship

• Access Time Formula: Access Time = Seek Time + Rotational Latency

• Performance Impact: Both factors contribute to the total delay in data retrieval.
Higher RPM disks reduce rotational latency but may not significantly affect seek time,
which depends on the mechanical movement of the head.

Together, seek time and rotational latency determine the overall access time for reading or
writing data on a disk. Optimizing both is essential for enhancing disk performance.

DISK SCHEDULING ALGORITHMS

Disk scheduling algorithms manage the order of disk I/O requests to optimize performance
by reducing seek time, minimizing latency, and maximizing throughput. The following are
some of the disk scheduling algorithms:

Problem:
Disk with 200 tracks and the request queue: 98, 183, 37, 122, 14, 124, 65, 67
Head starts at track 53 and moves toward lower-numbered tracks (to the left).
a. Calculate the total head movement using the FCFS, SSTF, SCAN, C-SCAN and C-LOOK
scheduling algorithm.
b. Find average seek time, if the seek time per track is 1 ms

FIRST-COME, FIRST-SERVED (FCFS):

o Processes requests in the order they arrive.


o Advantages: Simple implementation.
o Disadvantages: Can lead to long wait times.


Initial head position: 53
Sequence to service: 98, 183, 37, 122, 14, 124, 65, 67
Total head movement = 45+85+146+85+108+110+59+2 = 640 tracks

Seek time per track = 1 ms
Average seek time = (Total head movement × Seek time per track) / (Number of requests)
Average seek time = (640 × 1) / 8 = 80 ms
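
The FCFS computation above is easy to verify with a short program; the head position, request queue, and seek time follow the problem statement:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};
    int n = sizeof(queue) / sizeof(queue[0]);
    int head = 53, total = 0;

    for (int i = 0; i < n; i++) {
        total += abs(queue[i] - head); /* tracks moved to reach the next request */
        head = queue[i];
    }
    printf("Total head movement: %d tracks\n", total);         /* 640 */
    printf("Average seek time: %.2f ms\n", (double)total / n); /* 80.00 at 1 ms/track */
    return 0;
}

The other algorithms differ only in the order in which the queue is visited; reordering the array accordingly and rerunning the same loop reproduces each total below.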

SHORTEST SEEK TIME FIRST (SSTF):

o Selects the request closest to the current head position.


o Advantages: Minimizes average seek time.
o Disadvantages: Can cause starvation for distant requests.

Service order (always the nearest pending request): 65, 67, 37, 14, 98, 122, 124, 183
Total head movement = 12+2+30+23+84+24+2+59 = 236 tracks
Seek time per track = 1 ms
Average seek time = (236 × 1) / 8 = 29.5 ms

ELEVATOR ALGORITHM (SCAN)

o Moves in one direction servicing requests until it reaches the end, then
reverses.
o Advantages: Reduces seek time and provides uniform wait times.
o Disadvantages: Longer wait times for requests at the ends.

In this problem, the head movement direction is given as left (toward lower-numbered tracks).


Service order (moving left, reaching track 0, then reversing): 37, 14, [track 0], 65, 67, 98, 122, 124, 183
Total head movement = 16+23+14+65+2+31+24+2+59 = 236 tracks
Seek time per track = 1 ms
Average seek time = (236 × 1) / 8 = 29.5 ms

CIRCULAR SCAN (C-SCAN)

o Similar to SCAN, but returns to the other end without servicing on the return.
o Advantages: More uniform wait times.
o Disadvantages: May increase average wait time for end requests.

Service order (moving left to track 0, jumping to track 199, then continuing in the same direction): 37, 14, [track 0], [jump to 199], 183, 124, 122, 98, 67, 65
Total head movement = 16+23+14+199+16+59+2+24+31+2 = 386 tracks
Seek time per track = 1 ms
Average seek time = (386 × 1) / 8 = 48.25 ms


C-LOOK

o Services requests in one direction only, like C-SCAN, but the head goes only as far as the last request before jumping back to the furthest request at the other end; it never travels to the physical ends of the disk.
o Advantages: More efficient than C-SCAN.
o Disadvantages: Still has potential for request starvation.

Service order (moving left, then jumping back to the highest request): 37, 14, [jump to 183], 124, 122, 98, 67, 65
Total head movement = 16+23+169+59+2+24+31+2 = 326 tracks
Seek time per track = 1 ms
Average seek time = (326 × 1) / 8 = 40.75 ms

HOW DISK SCHEDULING ALGORITHMS ARE SELECTED IN DIFFERENT OS AND THE FACTORS THAT INFLUENCE THE CHOICE
Disk scheduling algorithms are chosen based on various factors related to system
requirements, workload patterns, and hardware characteristics. Different operating systems
(OS) might employ specific algorithms to optimize performance, considering the trade-offs
between efficiency, response time, and fairness. Selecting the right disk scheduling algorithm
depends on balancing these factors to match the specific goals of the operating system and
the applications it supports.
The following key factors influencing the choice of disk scheduling algorithms:

Algorithm | Characteristics                                             | Typical Use Case
FCFS      | Simple, fair, may result in long wait times                 | Low-throughput systems, single-user OS, or sequential access
SSTF      | Minimizes seek time but can starve requests                 | Systems needing quick access with a single process
SCAN      | Provides fairness, reduces head movement in both directions | Multi-user systems, large disk environments, high-throughput systems
C-SCAN    | Fair, predictable, goes in one direction only               | High-performance and real-time systems, continuous data streams
LOOK      | Similar to SCAN but stops at last request                   | Multi-user, high-access systems with varying request patterns
C-LOOK    | Similar to C-SCAN but stops at last request                 | High-performance, time-sensitive environments

FCFS VS. SSTF DISK SCHEDULING ALGORITHMS

Feature           | First-Come, First-Served (FCFS)  | Shortest Seek Time First (SSTF)
Scheduling Order  | Requests served in arrival order | Requests served based on closest track
Average Seek Time | Higher                           | Lower
Fairness          | High (no starvation)             | Lower (can cause starvation for distant requests)
Complexity        | Simple                           | Slightly more complex
Ideal Use Case    | Systems where fairness is key    | Systems prioritizing efficiency over fairness

FCFS is simpler and fair but has higher seek times, while SSTF minimizes seek times at the
cost of potential request starvation.

CHALLENGES IN THE DISK MANAGEMENT STRATEGIES

Disk management in operating systems involves strategies like partitioning, file system
organization, disk scheduling, caching, data allocation, and fragmentation control. These aim
to optimize storage, access speed, and data integrity.

Effective disk management involves a combination of strategies tailored to the needs of the
operating system, hardware, and workload characteristics. While each strategy comes with
specific benefits, the challenges often revolve around balancing performance, data integrity,


and reliability, especially as storage technologies and data requirements evolve. As systems
continue to demand higher speeds and reliability, disk management must continue to
innovate and adapt to meet these challenges efficiently.

Major challenges in Disk Management:

1. Balancing Performance and Reliability


o Achieving both high performance and reliable data access is difficult.
Techniques like RAID or journaling help maintain data integrity but can add
latency and require additional processing power and storage overhead.
2. Handling Fragmentation
o Fragmentation is a persistent issue on disks that use file systems with non-
contiguous allocation. Regular defragmentation is required to maintain
performance, though it can be slow and disrupt system availability during the
process.
3. Scaling with Data Growth
o With ever-growing data volumes, disk management systems need to
efficiently handle large-scale data without degradation in performance. This
requires careful planning in allocation strategies, caching, and possibly
integration with scalable file systems or storage solutions.
4. Concurrency and Fairness
o Multi-user environments or systems with concurrent processes face
challenges in managing I/O requests fairly. Disk scheduling algorithms must
balance between optimizing throughput and ensuring that no process is
starved for access.
5. Data Recovery and Fault Tolerance
o Ensuring data can be recovered after failures is a complex task that involves
multiple strategies (e.g., RAID, journaling). However, fault tolerance
techniques often require redundancy, increasing storage costs, and rebuilding
after failures can significantly impact performance.
6. Transitioning to Modern Storage Technologies
o As SSD storage increasingly replaces traditional HDDs, disk management
strategies must adapt. Traditional disk scheduling and defragmentation
techniques aren’t as relevant for SSDs. However, SSDs introduce new
challenges, like managing limited write endurance and optimizing for low-
latency access.
7. Energy Efficiency
o Disk management strategies that optimize for energy efficiency are
particularly crucial in data centers and mobile devices. Techniques like
spinning down idle disks in HDDs or optimizing read/write cycles in SSDs can
save power, but need careful planning to avoid performance degradation.

Overall, disk management requires balancing efficiency, durability, and scalability to support
modern storage needs.


FILE SYSTEM IMPLEMENTATION

FREE SPACE MANAGEMENT OF FILES

Free-space management is crucial for efficient storage allocation on disks. It ensures that the
operating system can quickly locate and allocate free space for files, helping to optimize
space utilization and performance. Each technique has trade-offs, and the choice depends on
factors like disk size, file allocation patterns, and system performance requirements.

Here are the primary techniques of free-space management:

1. Linked Lists

Free blocks are linked together in a list, where each free block contains a pointer to the next
free block. The OS only needs to keep a pointer to the first free block.

• Efficiency:

o Space Efficiency: Minimal overhead, as only pointers are needed within free
blocks.
o Performance Efficiency: Suitable for systems where files don’t require
contiguous blocks. However, finding multiple free blocks for larger files
requires traversal, which can be slow.
o Drawback: Not efficient for finding contiguous space or fast access to
scattered free blocks, making it less suitable for high-performance
requirements.
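
A minimal C sketch of this scheme, simulating the free list over an in-memory array that stands in for disk blocks; the disk size and the way each free block stores its "next" pointer are simplifying assumptions for illustration.

#include <stdio.h>

#define NBLOCKS 16
#define NIL     -1

/* Simulated disk: each free block stores the index of the next free block. */
int next_of[NBLOCKS];
int free_head = NIL;       /* OS keeps only a pointer to the first free block */

void release_block(int b) {        /* push a freed block onto the free list */
    next_of[b] = free_head;
    free_head = b;
}

int allocate_block(void) {         /* pop the first free block, or NIL if full */
    int b = free_head;
    if (b != NIL) free_head = next_of[b];
    return b;
}

int main(void) {
    for (int b = NBLOCKS - 1; b >= 0; b--) release_block(b);  /* all blocks free */
    int a = allocate_block(), c = allocate_block();
    printf("allocated blocks %d and %d\n", a, c);
    release_block(a);
    printf("next allocation reuses block %d\n", allocate_block());
    return 0;
}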


2. Bitmaps (Bit Vectors)

A bitmap is an array of bits where each bit represents a block on the disk. If a block is
free, its corresponding bit is set to 1; if occupied, it’s set to 0 (or vice versa, depending on
implementation).


• Efficiency:
o Space Efficiency: Requires minimal space, as each block is represented by a
single bit.
o Performance Efficiency: Scanning for free blocks is fast since the bitmap can
be checked in bulk, and multiple blocks can be identified at once. This is
particularly useful for locating contiguous blocks.
o Drawback: Bitmaps can become large for large disks, and sequential searches
can be slow if there is no contiguous free space.
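
The following C sketch illustrates the bitmap approach (here, 1 = free, matching the description above); the 64-block disk size is an assumption for demonstration. Note how a whole byte of used blocks is skipped at once, which is what makes bulk scanning fast.

#include <stdio.h>
#include <string.h>

#define NBLOCKS 64                       /* illustrative disk size */
unsigned char bitmap[NBLOCKS / 8];       /* 1 bit per block; 1 = free */

void set_free(int b) { bitmap[b / 8] |=  (1u << (b % 8)); }
void set_used(int b) { bitmap[b / 8] &= ~(1u << (b % 8)); }

/* Scan for the first free block; skip all-zero bytes in bulk. */
int first_free(void) {
    for (int i = 0; i < NBLOCKS / 8; i++) {
        if (bitmap[i] == 0) continue;        /* 8 used blocks checked at once */
        for (int bit = 0; bit < 8; bit++)
            if ((bitmap[i] >> bit) & 1) return i * 8 + bit;
    }
    return -1;                               /* disk full */
}

int main(void) {
    memset(bitmap, 0xFF, sizeof bitmap);     /* mark every block free */
    set_used(0); set_used(1); set_used(2);   /* pretend blocks 0-2 hold data */
    int b = first_free();
    printf("first free block: %d\n", b);     /* expected: 3 */
    set_used(b);
    printf("next free block: %d\n", first_free());
    return 0;
}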

3. Grouping: Grouping is an extension of linked lists. Instead of pointing to a single free block,
each free block points to a group of free blocks, typically a fixed-size set. The last block in
each group points to the next group of free blocks.

• Efficiency:
o Space Efficiency: More efficient than simple linked lists since fewer pointers
are needed.
o Performance Efficiency: Allows faster allocation of multiple blocks, as each
access provides a group, which is efficient for allocating larger files or chunks
of space.
o Drawback: Slightly more complex than linked lists, and fragmentation can
still occur.
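
A simplified C model of grouping follows: each group lists a few free block numbers and links to the next group. In a real file system the link is stored in a disk block rather than a heap pointer, and groups hold many more entries; both are simplifications made here for illustration.

#include <stdio.h>
#include <stdlib.h>

#define GROUP 3   /* entries per group (illustrative; real systems use many more) */

/* Each group lists free blocks; its link leads to the next group. */
struct group {
    int free_blocks[GROUP];   /* free block numbers in this group */
    struct group *next;       /* next group of free blocks */
};

int allocate_from(struct group **head) {
    struct group *g = *head;
    if (!g) return -1;                        /* no free space left */
    for (int i = 0; i < GROUP; i++)
        if (g->free_blocks[i] != -1) {        /* take first valid entry */
            int b = g->free_blocks[i];
            g->free_blocks[i] = -1;
            return b;
        }
    *head = g->next;                          /* group exhausted: follow link */
    free(g);
    return allocate_from(head);
}

int main(void) {
    /* Build two groups covering free blocks 10-12 and 20-22 (illustrative). */
    struct group *g2 = malloc(sizeof *g2);
    *g2 = (struct group){{20, 21, 22}, NULL};
    struct group *g1 = malloc(sizeof *g1);
    *g1 = (struct group){{10, 11, 12}, g2};

    for (int i = 0; i < 4; i++)               /* 4th allocation crosses groups */
        printf("allocated block %d\n", allocate_from(&g1));
    return 0;
}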

4. Counting: In counting, the system tracks free blocks in terms of starting location and
length (or number of consecutive free blocks). Instead of tracking each block individually,
each entry represents a contiguous group of free blocks.

• Efficiency:
o Space Efficiency: Requires less space, as only starting locations and counts are stored for each group of blocks.
o Performance Efficiency: Highly efficient for finding and allocating contiguous space, especially useful for large files, as it avoids the need to traverse individual blocks.
o Drawback: If free space becomes fragmented, efficiency drops, since the scheme relies on contiguous blocks.
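
The counting scheme is essentially an extent list. The C sketch below stores (start, count) pairs and allocates contiguous runs with a first-fit scan; the extent values are illustrative assumptions.

#include <stdio.h>

#define MAX_EXTENTS 8

/* Each entry records a run of consecutive free blocks: (start, count). */
struct extent { int start, count; };

struct extent free_list[MAX_EXTENTS] = {
    {4, 3},    /* blocks 4-6 free   (illustrative) */
    {10, 8},   /* blocks 10-17 free */
    {30, 2},   /* blocks 30-31 free */
};
int n_extents = 3;

/* First-fit allocation of `want` contiguous blocks; returns start or -1. */
int allocate_contiguous(int want) {
    for (int i = 0; i < n_extents; i++) {
        if (free_list[i].count >= want) {
            int start = free_list[i].start;
            free_list[i].start += want;   /* shrink the extent from the front */
            free_list[i].count -= want;
            return start;
        }
    }
    return -1;   /* no run long enough: free space is too fragmented */
}

int main(void) {
    printf("5 blocks -> start %d\n", allocate_contiguous(5));  /* expect 10 */
    printf("3 blocks -> start %d\n", allocate_contiguous(3));  /* expect 4  */
    printf("9 blocks -> start %d\n", allocate_contiguous(9));  /* expect -1 */
    return 0;
}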

FILE SYSTEM MOUNTING

File system mounting is the process of linking a file system to a designated directory, called
the mount point, making it accessible in the operating system's directory structure.

Steps in Mounting:

1. Identify Device and File System: OS checks compatibility of the device and file
system type.
2. Verify Permissions: Ensures only authorized users can mount.
3. Validate Integrity: Checks file system structure for consistency.
4. Assign to Mount Point: Maps the file system to a directory for access.
5. Update System Tables: Adds mount information to system tables.
6. Access: Files are accessible under the mount point directory.

Unmounting safely disconnects the file system to prevent data loss.
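
On Linux, these steps surface to programs through the mount(2) and umount(2) system calls. The sketch below is a minimal illustration only; the device path, mount point, and file system type are assumptions, and the program needs root privileges to run.

#include <stdio.h>
#include <sys/mount.h>   /* mount(2), umount(2) -- Linux-specific */

int main(void) {
    /* Illustrative paths: an ext4 partition and an existing empty directory. */
    const char *device = "/dev/sdb1";
    const char *mount_point = "/mnt/data";

    /* Attach the file system read-only at the mount point (needs root). */
    if (mount(device, mount_point, "ext4", MS_RDONLY, NULL) != 0) {
        perror("mount");
        return 1;
    }
    printf("%s mounted at %s\n", device, mount_point);

    /* Safely disconnect it once it is no longer in use. */
    if (umount(mount_point) != 0) {
        perror("umount");
        return 1;
    }
    printf("%s unmounted\n", mount_point);
    return 0;
}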

Benefits: Provides seamless access, device flexibility, and simplifies data management
across multiple storage systems.
