OS Important Unit Wise 5 & 10 Mark
1. The process state diagram represents the various states a process goes through during its lifecycle
in an operating system. Here’s a simple sketch of the typical process states and transitions:
1. New → Ready : Process creation is complete, and it’s waiting for CPU time.
2. Ready → Running : The process scheduler allocates CPU to the process.
3. Running → Ready : The process is interrupted and goes back to the ready state (e.g., due to time-
slicing in a multitasking system).
4. Running → Blocked : The process waits for an I/O event or resource.
5. Blocked → Ready : The process receives the necessary resource or I/O completion and moves
back to the ready state.
6. Running → Terminated : The process completes its execution.
This diagram shows the basic states and transitions a process undergoes in an operating system.
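As a small illustration, the legal transitions can be encoded as a lookup table; the following is a minimal Python sketch (the state and event names are labels chosen here, not any OS API):

```python
# A minimal sketch of the process-state machine described above.
# State and event names are illustrative labels, not an OS API.
TRANSITIONS = {
    ("New", "admitted"): "Ready",
    ("Ready", "scheduler dispatch"): "Running",
    ("Running", "interrupt / time slice expired"): "Ready",
    ("Running", "I/O or event wait"): "Blocked",
    ("Blocked", "I/O or event completion"): "Ready",
    ("Running", "exit"): "Terminated",
}

def next_state(state, event):
    """Return the next state, or raise if the transition is illegal."""
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"illegal transition: {state} on {event}")
    return TRANSITIONS[(state, event)]

print(next_state("Running", "I/O or event wait"))  # Blocked
```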
2. The main components of an operating system (OS) are responsible for managing various resources
and providing essential services to applications. Here’s an overview of each component:
1. Kernel
- The core part of the OS, the kernel manages system resources and communication between
hardware and software.
- It handles memory, processes, and device management, acting as a bridge between applications
and the hardware.
2. Process Management
- This component manages processes in the system, including creation, scheduling, and
termination.
- It ensures fair CPU allocation, manages process states, and handles inter-process communication
(IPC).
3. Memory Management
- Responsible for managing the system’s RAM, allocating and deallocating memory to processes as
needed.
- Includes virtual memory management, which allows the system to use disk space as an extension
of RAM, enhancing performance.
4. File Management
- Manages files and directories on storage devices, including their creation, deletion, and organization.
- Provides directory structures, file permissions, and file operations (such as read, write, and delete).
5. Device Management
- Also known as the I/O management, this component handles communication with all hardware
devices.
- Uses device drivers to provide a standardized interface for each device, ensuring smooth
hardware-software interaction.
3. The goals of an operating system (OS) can be broadly categorized into primary goals (essential for
the functionality of the OS) and secondary goals (aimed at improving user experience and
efficiency). Here’s an overview:
Primary Goals
1. Efficient Resource Management
2. Process Management and Multitasking
3. User Interface Facilitation
Secondary Goals
1) Convenience
2) Efficiency and Performance
3) Scalability
4) Flexibility and Adaptability
- Supports various types of hardware, enabling the OS to be installed on multiple types of
devices
- Ensures compatibility with a broad range of applications and devices, promoting a more adaptable
and future-proof system.
5) Portability
- Ensures that the OS can work across different hardware configurations with minimal changes,
allowing it to be adapted to new machines and devices easily.
These goals collectively contribute to a system that is robust, efficient, and user-friendly, allowing
users and applications to make the most of the hardware capabilities.
4. CPU scheduling criteria are the standards by which the performance of CPU scheduling algorithms
is evaluated. Different scheduling algorithms optimize different criteria based on the system’s needs.
Here’s a breakdown of the most commonly considered CPU scheduling criteria:
1. CPU Utilization
- Definition: Measures the percentage of time the CPU is actively working on processes rather than
remaining idle.
- Goal: Maximize CPU utilization to keep the CPU busy and make the most out of system
resources.
- Example: A high CPU utilization means that most of the CPU’s time is spent processing tasks,
while low utilization indicates many idle periods.
2. Throughput
- Definition: The number of processes that complete their execution in a given period.
- Goal: Increase throughput to maximize the number of tasks processed over time.
- Example: If 10 processes complete every second, the throughput is 10 processes per second. High
throughput is desired in systems that need to process many tasks quickly.
3. Turnaround Time
- Definition: The total time taken for a process to complete from submission to completion,
including waiting, processing, and I/O time.
- Goal: Minimize turnaround time to improve response for batch jobs and complete tasks faster.
- Example: If a process is submitted at time 0 and finishes at time 20, its turnaround time is 20
seconds. Lower turnaround times are ideal for faster overall processing.
4. Waiting Time
- Definition: The time a process spends in the ready queue waiting to be executed by the CPU.
- Goal: Minimize waiting time to reduce delays and enhance performance, especially in systems
with time-sensitive tasks.
- Example: If a process waits for 10 seconds in the ready queue before getting CPU time, its
waiting time is 10 seconds. Lower waiting times help in reducing delays and ensuring better
performance.
5. Response Time
- Definition: The time between submitting a request and the first response from the system,
particularly relevant in interactive systems.
- Goal: Minimize response time to make the system feel more responsive, especially important in
real-time and interactive applications.
- Example: In a web server, if a request is made at time 0 and the server begins responding at time
2, the response time is 2 seconds. Lower response times are crucial in environments where quick
feedback is essential.
6. Fairness
- Definition: Ensures that all processes get an equitable share of CPU time and are not subjected to
starvation (indefinitely delayed).
- Goal: Provide a fair allocation of CPU time across all processes, balancing between short and
long tasks.
- Example: In a round-robin scheduling algorithm, fairness is ensured by assigning each process a
fixed time slice, preventing any single process from monopolizing the CPU.
Each of these criteria serves different types of systems (batch, interactive, real-time) and helps guide
the choice of scheduling algorithm based on the system’s specific goals and requirements.
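To make the turnaround and waiting criteria concrete, here is a minimal Python sketch that computes them for hypothetical burst times, assuming FCFS order and all processes arriving at time 0:

```python
# A minimal sketch: waiting and turnaround times for hypothetical burst
# times, assuming FCFS order and all processes arriving at time 0.
def fcfs_metrics(burst_times):
    clock, waiting, turnaround = 0, [], []
    for burst in burst_times:
        waiting.append(clock)              # time spent in the ready queue
        clock += burst
        turnaround.append(clock)           # completion time minus arrival (0)
    return waiting, turnaround

w, t = fcfs_metrics([5, 3, 8])
print("waiting:", w, "avg:", sum(w) / len(w))     # [0, 5, 8], avg 4.33
print("turnaround:", t, "avg:", sum(t) / len(t))  # [5, 8, 16], avg 9.67
```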
IPC
5. Interprocess communication is the mechanism provided by the operating system that allows
processes to communicate with each other.
Types of Processes
• Independent process
• Co-operating process
Independent process:
• An independent process is not affected by the execution of other processes.
• Although one might expect independently running processes to execute very efficiently, in
reality there are many situations where cooperation between processes is beneficial.
Co-operating process:
• A co-operating process can be affected by, and can affect, other executing processes.
• This cooperative nature can be exploited to increase computational speed, convenience,
and modularity.
Methods of IPC:
• Shared Memory
• Message Passing
1) Shared Memory Method:
Ex: Producer-Consumer problem
There are two processes: Producer and Consumer .
The producer produces some items and the Consumer consumes that item. The two processes
share a common space or memory location known as a buffer.
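A minimal sketch of the Producer-Consumer pattern, using Python threads and a shared in-process list as the buffer (real shared-memory IPC between processes would use OS facilities such as mmap or multiprocessing.shared_memory; the buffer size and item counts are arbitrary):

```python
# A minimal Producer-Consumer sketch using a shared in-process buffer.
import threading

BUFFER_SIZE = 4
buffer = []                                # the shared "memory" region
empty = threading.Semaphore(BUFFER_SIZE)   # free slots in the buffer
full = threading.Semaphore(0)              # filled slots in the buffer
mutex = threading.Lock()                   # protects the buffer itself

def producer():
    for item in range(8):
        empty.acquire()                    # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()                     # signal one filled slot

def consumer():
    for _ in range(8):
        full.acquire()                     # wait for a filled slot
        with mutex:
            item = buffer.pop(0)
        empty.release()                    # signal one free slot
        print("consumed", item)

p, c = threading.Thread(target=producer), threading.Thread(target=consumer)
p.start(); c.start(); p.join(); c.join()
```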
2) Message Passing Method:
In this method, processes communicate with each other without using any kind of shared
memory. If two processes p1 and p2 want to communicate with each other, they proceed as follows:
• Establish a communication link (if a link already exists, no need to establish it again.)
• Start exchanging messages using basic primitives.
• We need at least two primitives:
– Send (message, destination) or send (message)
– Receive (message, host) or receive (message)
Synchronous and Asynchronous Message Passing:
A process that is blocked is one that is waiting for some event, such as a resource becoming
available or the completion of an I/O operation.
Blocking is considered synchronous: a blocking send means the sender is blocked until the
message is received, and a blocking receive means the receiver is blocked until a message is
available. The possible combinations are:
• Blocking send and blocking receive
• Non-blocking send and Non-blocking receive
• Non-blocking send and Blocking receive (Mostly used)
In Direct message passing:
The process which wants to communicate must explicitly name the recipient or sender of the
communication.
e.g. send(p1, message)
Indirect message passing:
Processes use mailboxes (also referred to as ports) for sending and receiving messages.
Examples of IPC systems
1. POSIX: uses the shared memory method.
2. Mach: uses message passing.
3. Windows XP: uses message passing via local procedure calls (LPC).
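A small sketch of indirect message passing between two processes, using Python's multiprocessing.Queue as the mailbox (the process names and message text are arbitrary):

```python
# A sketch of indirect message passing: the Queue acts as a mailbox/port
# shared by the two processes, with no shared memory needed.
from multiprocessing import Process, Queue

def p1(mailbox):
    mailbox.put("hello from p1")           # the send(message) primitive

def p2(mailbox):
    msg = mailbox.get()                    # a blocking receive(message)
    print("p2 received:", msg)

if __name__ == "__main__":
    mailbox = Queue()
    a = Process(target=p1, args=(mailbox,))
    b = Process(target=p2, args=(mailbox,))
    a.start(); b.start(); a.join(); b.join()
```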
6. Operating systems (OS) can be classified into several types based on their properties and
functionalities. Here are the essential properties of different types of operating systems:
1. Batch Operating Systems
- Description: These systems execute jobs in batches without user interaction. Jobs are collected
and processed sequentially.
- Essential Properties:
- No Interaction: Once jobs are submitted, they run without further input from users.
- Job Scheduling: Uses job scheduling algorithms to optimize CPU utilization.
- Efficiency: Designed to handle large volumes of jobs efficiently by minimizing idle time.
- Fixed Priority: Jobs typically have fixed priorities, and there may be longer turnaround times for
low-priority jobs.
3. Priority Scheduling
• Description: Each process is assigned a priority, and the process with the highest
priority (lowest number) is executed first.
• Advantages:
o Flexibility in choosing which process to execute based on priority.
• Disadvantages:
o Can lead to starvation of lower-priority processes.
o Requires additional overhead to manage priorities.
• Example:
o Consider three processes: P1 (priority 2), P2 (priority 1), P3 (priority 3).
o Execution order: P2 → P1 → P3.
4. Round Robin (RR) Scheduling
• Description: Each process is assigned a fixed time slot (time quantum), and processes
are executed in a cyclic order. If a process does not complete within its time quantum,
it is moved to the end of the queue.
• Advantages:
o Fair allocation of CPU time to all processes.
o Good for time-sharing systems.
• Disadvantages:
o Context switching adds overhead.
o Performance highly depends on the size of the time quantum.
• Example:
o Consider three processes: P1 (burst time 5), P2 (burst time 3), P3 (burst time
8) with a time quantum of 2.
o Execution order: P1 (2 units) → P2 (2 units) → P3 (2 units) → P1 (2 units) →
P2 (1 unit) → P3 (2 units) → P1 (1 unit) → P3 (2 units) → P3 (2 units).
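A minimal Round Robin simulation that reproduces this execution order (all processes are assumed to arrive at time 0):

```python
# A minimal Round Robin simulation (all processes assumed to arrive at
# time 0) that reproduces the execution order worked out above.
from collections import deque

def round_robin(bursts, quantum):
    queue = deque(bursts.items())          # (name, remaining burst time)
    order = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)      # run for at most one quantum
        order.append(f"{name}({run})")
        if remaining > run:
            queue.append((name, remaining - run))  # back of the ready queue
    return order

print(round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=2))
# ['P1(2)', 'P2(2)', 'P3(2)', 'P1(2)', 'P2(1)', 'P3(2)', 'P1(1)', 'P3(2)', 'P3(2)']
```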
8. Operating systems (OS) provide a variety of services that facilitate the execution of programs,
manage hardware resources, and enable user interaction. Here’s a detailed explanation of the essential
operating system services:
1. Program Execution
- Description: The OS is responsible for loading programs into memory and executing them.
- Services Provided:
- Process Creation and Termination: The OS allows users to create and terminate processes,
managing their lifecycle.
- Program Loading: It loads program binaries into memory and prepares them for execution.
- Execution Management: The OS schedules and executes processes, ensuring they run efficiently
and in accordance with scheduling policies.
2. I/O Operations
- Description: The OS manages input and output devices, facilitating communication between the
computer and peripheral devices.
- Services Provided:
- Device Management: The OS controls devices such as disks, printers, and network interfaces,
providing a uniform interface for accessing hardware.
- Buffering and Caching: It may use buffers to store data temporarily during transfer between the
CPU and devices to optimize performance.
- Device Drivers: The OS includes device drivers that allow it to interact with hardware
components, abstracting the details from applications.
4. Communication
- Description: The OS facilitates communication between processes, whether they are on the same
machine or across a network.
- Services Provided:
- Inter-Process Communication (IPC): Mechanisms like pipes, message queues, shared memory,
and sockets for processes to communicate and synchronize.
- Network Communication: Provides protocols and services for networking, enabling data
exchange between systems.
6. Resource Allocation
- Description: The OS manages the allocation of hardware resources such as CPU time, memory
space, disk space, and I/O devices to processes.
- Services Provided:
- Resource Management: Tracks resource usage and allocates resources to processes based on
scheduling policies.
- Scheduling: Implements scheduling algorithms to determine which processes run at what times,
ensuring fairness and efficiency.
UNIT - 2
1. Deadlock avoidance is a crucial aspect of operating system design and plays an indispensable
role in upholding the dependability and steadiness of computer systems.
Safe State and Unsafe State:
Safe State:
• A safe state is a system state in which resources can be allocated to every process (up to its
declared maximum) in some order without causing deadlock.
• There exists at least one safe sequence in which all processes can run to completion, so
deadlock can always be avoided.
Unsafe State:
• An unsafe state is a system state from which a deadlock may occur.
• The successful completion of all processes is not assured, and the risk of deadlock is high.
Deadlock Avoidance Algorithms
When resource categories have only single instances of their resources, Resource- Allocation
Graph Algorithm is used. In this algorithm, a cycle is a necessary and sufficient condition for
deadlock.
When resource categories have multiple instances of their resources, Banker’s Algorithm is
used. In this algorithm, a cycle is a necessary but not a sufficient condition for deadlock.
Resource-Allocation Graph Algorithm:
• Resource Allocation Graph (RAG) is a popular technique used for deadlock avoidance.
• It is a directed graph that represents the processes in the system, the resources available,
and the relationships between them.
• A process node in the RAG has two types of edges, request edges, and assignment edges.
The RAG technique is straightforward to implement and provides a clear visual representation
of the processes and resources in the system.
Banker’s Algorithm:
The banker’s algorithm is a deadlock avoidance algorithm used in operating systems.
It was proposed by Edsger Dijkstra in 1965.
It works by keeping track of the total number of resources available in the system and the number
of resources allocated to each process
The resources can be of different types such as memory, CPU cycles, or I/O devices
Characteristics of Banker’s Algorithm:
• If a process requests resources, it may have to wait until they can be granted safely.
• This algorithm consists of advanced features for maximum resource allocation.
• There are limited resources in the system we have.
Data Structures used to implement the Banker’s Algorithm:
1.Available
• It is an array of length m.
• It represents the number of available resources of each type.
• If Available[j] = k, then there are k instances of resource type Rj available.
2.Max
• It is an n x m matrix which represents the maximum number of instances of each resource
that a process can request.
• If Max[i][j] = k, then the process Pi can request at most k instances of resource type Rj.
3.Allocation
• It is an n x m matrix which represents the number of resources of each type currently
allocated to each process.
• If Allocation[i][j] = k, then process Pi is currently allocated k instances of resource type
Rj.
4. Need
• It is a two-dimensional array.
• It is an n x m matrix which indicates the remaining resource needs of each process.
• If Need[i][j] = k, then process Pi may need k more instances of resource type Rj to complete
its task.
Need[i][j] = Max[i][j] – Allocation[i][j]
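A sketch of the safety check at the heart of the Banker's Algorithm, using the data structures above; the matrices shown are illustrative textbook-style values, not taken from this text:

```python
# A sketch of the Banker's safety check using Available, Max, and
# Allocation as defined above. Returns a safe sequence or None.
def is_safe(available, max_need, allocation):
    n, m = len(max_need), len(available)
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work = available[:]                    # resources currently free
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                # Pi can run to completion and release its allocation.
                for j in range(m):
                    work[j] += allocation[i][j]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None                    # no process can proceed: unsafe
    return sequence

# Illustrative instance: 5 processes, 3 resource types.
available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))  # [1, 3, 4, 0, 2] (safe)
```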
2. Deadlock Avoidance
Deadlock avoidance requires the system to have additional information in advance about
which resources a process will request and release throughout its lifetime. One of the most
common algorithms used in deadlock avoidance is the Banker's Algorithm.
• Banker's Algorithm: This algorithm checks whether allocating a requested resource
will leave the system in a safe state. A safe state means that there is a sequence of all
processes where each process can finish execution with the remaining available
resources.
Advantages:
• Ensures that the system remains in a safe state and deadlock-free.
Disadvantages:
• Requires prior knowledge of the maximum resource requirements of each process,
which is often unrealistic.
• Can be computationally expensive due to frequent checks.
3. Deadlock Detection and Recovery
In deadlock detection, the system does not attempt to prevent deadlocks but instead
periodically checks for them. If a deadlock is detected, the system takes actions to recover
from it.
• Deadlock Detection Algorithm: This algorithm checks the system for a circular wait
condition. If a cycle is detected, it indicates a deadlock.
• Recovery Methods:
o Process Termination: Terminate one or more processes involved in the
deadlock until the cycle is broken.
o Resource Preemption: Preempt some resources from processes and allocate
them to other processes to break the deadlock.
Advantages:
• Allows for maximum resource utilization as deadlocks are only handled when they
occur.
Disadvantages:
• Regular checking for deadlocks can be computationally expensive.
• Recovering from a deadlock may involve terminating processes or rolling back
actions, which can lead to data inconsistency and other issues.
4. Deadlock Ignorance
Deadlock ignorance is the simplest approach where the operating system assumes that
deadlocks do not occur or occur very rarely and does nothing to detect or prevent them. This
is the strategy used by most operating systems, including UNIX and Windows.
Advantages:
• Simple to implement with no overhead of detecting or preventing deadlocks.
Disadvantages:
• Deadlocks may occur, and when they do, they may lead to system crashes.
3. A semaphore is a synchronization tool in operating systems used to manage concurrent processes. It
is essentially a variable used to control access to shared resources in a multi-process environment. By
using semaphores, an operating system can coordinate the activities of multiple processes to avoid
issues like race conditions, where two or more processes try to access shared resources
simultaneously.
Types of Semaphores
There are two primary types of semaphores:
1. Binary Semaphore: This semaphore can only take values 0 and 1, similar to a lock mechanism. It
allows or disallows access to a single shared resource.
2. Counting Semaphore: This semaphore can take a range of integer values. It is used when there are
multiple instances of a resource (e.g., multiple printers) and allows for a specified number of
processes to access the resource simultaneously.
How Semaphores Work
A semaphore typically has two primary operations:
- Wait (P operation): This operation decreases the semaphore value by 1. If the resulting value is
negative, the process is blocked until the semaphore is positive again.
- Signal (V operation): This operation increases the semaphore value by 1, potentially unblocking a
waiting process.
Uses of Semaphores
Semaphores are mainly used for:
1. Mutual Exclusion (Mutex): Ensures that only one process can access a critical section at a time.
2. Synchronization: Coordinates the order of execution among processes. For example, in producer-
consumer problems, semaphores can synchronize the actions of producers (adding items) and
consumers (removing items) to avoid conflicts.
3. Resource Management: Semaphores manage limited resources by keeping track of the available
number and blocking processes if the resources are fully occupied.
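A short sketch of a counting semaphore managing a pool of two printers, using Python's threading.Semaphore (the job count and sleep duration are arbitrary):

```python
# A sketch of the resource-management use case above: a counting
# semaphore limits concurrent access to two printer instances.
import threading, time

printers = threading.Semaphore(2)          # counting semaphore: 2 instances

def print_job(job_id):
    printers.acquire()                     # wait (P): take a printer
    try:
        print(f"job {job_id} printing")
        time.sleep(0.1)                    # simulate the I/O work
    finally:
        printers.release()                 # signal (V): return the printer

jobs = [threading.Thread(target=print_job, args=(i,)) for i in range(5)]
for j in jobs: j.start()
for j in jobs: j.join()
```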
4. To solve the critical section problem, which occurs when multiple processes or threads access and
modify shared resources, three essential requirements must be met. These requirements ensure that
the operations on shared resources are performed without interference and data inconsistency. They
are:
1. Mutual Exclusion:
- Only one process can execute in its critical section (the code segment accessing shared resources)
at any given time. This prevents simultaneous access by multiple processes, avoiding data races.
2. Progress:
- If no process is executing in its critical section, only those processes that wish to enter their
critical sections may participate in deciding which one enters next, and this decision cannot be
postponed indefinitely.
3. Bounded Waiting:
- There must be a bound on the number of times other processes are allowed to enter their
critical sections after a process has requested entry and before that request is granted. This
prevents starvation.
Meeting these requirements allows for a reliable, consistent approach to managing access to shared
resources, which is crucial in a concurrent processing environment.
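A minimal sketch of the critical-section problem using a lock for mutual exclusion: without synchronization, the read-modify-write on the shared counter could interleave and lose updates; the lock serializes entry to the critical section (the thread and iteration counts are arbitrary):

```python
# A sketch of mutual exclusion: the lock is the entry/exit section
# around the critical section that updates the shared counter.
import threading

counter = 0
lock = threading.Lock()

def worker():
    global counter
    for _ in range(100_000):
        with lock:                         # entry section: acquire the lock
            counter += 1                   # critical section
        # exit section: the lock is released automatically here

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                             # always 400000 with the lock
```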
UNIT-3
Difference between Paging and Segmentation:

| Paging | Segmentation |
| --- | --- |
| The logical address is divided into a page number and a page offset. | The logical address is divided into a segment number and a segment offset. |
| May lead to internal fragmentation. | May lead to external fragmentation. |
| The page size is decided by the hardware. | The size of a segment is decided by the user (programmer). |
| To maintain the page data, a page table is created. | To maintain the segment data, a segment table is created. |
| The page table mainly contains the base address of each page. | The segment table mainly contains the base address and limit of each segment. |
| To calculate the absolute address, both the page number and the offset are required. | To calculate the absolute address, both the segment number and the offset are required. |
3. Segmentation is a memory management scheme in operating systems that divides a
process’s memory into different segments based on the logical divisions of the
program, such as code, data, stack, and heap. Each segment represents a logically
separate part of the process with a specific purpose, allowing efficient use of memory
and isolation of different sections for better protection and sharing.
1. Logical Division:
- In segmentation, a process is divided into segments, each corresponding to a
specific part or function of the program. For example, typical segments include:
- Code Segment: Contains executable code or instructions.
- Data Segment: Holds static variables and constants.
- Stack Segment: Manages function calls, return addresses, and local variables.
- Heap Segment: Used for dynamic memory allocation.
2. Segment Table:
- Each process has a segment table that stores the base address and length (or limit)
of each segment.
- Each entry in the segment table represents a segment and includes:
- Base Address: Starting physical address where the segment is stored in memory.
- Limit: The length of the segment, which defines its size.
Advantages of Segmentation
1. Logical Organization:
- Segmentation aligns with how programmers logically divide a program (code,
data, stack), making it easier to manage and understand.
Disadvantages of Segmentation
1. External Fragmentation:
- Since segments are of variable sizes, there can be gaps in memory (external
fragmentation), as contiguous blocks of memory may not always be available for a
new segment.
If a logical address exceeds the segment’s limit, an error or trap is triggered to prevent
illegal memory access.
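A small sketch of segment-table address translation with the limit check just described (the base and limit values are hypothetical):

```python
# A sketch of segment-table translation with the limit check above.
def translate(segment_table, seg_num, offset):
    base, limit = segment_table[seg_num]
    if offset >= limit:
        raise MemoryError("trap: offset exceeds segment limit")
    return base + offset                   # physical address

# segment number -> (base address, limit); hypothetical values
table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}
print(translate(table, 2, 53))             # 4300 + 53 = 4353
try:
    translate(table, 1, 500)               # offset 500 >= limit 400
except MemoryError as e:
    print(e)                               # the trap fires
```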
2. Page Table:
- Each process has a page table that maps logical page numbers to physical frame
numbers.
- The page table stores entries with:
- Page Number: Represents the page in the logical address space.
- Frame Number: The physical memory frame where the page is loaded.
- The page table allows for quick translation of logical addresses into physical
addresses.
3. Address Translation:
- A logical address in paging consists of:
- Page Number (p): Identifies which page in the logical memory is being accessed.
- Offset (d): Specifies the exact location within the page.
- To access a particular memory location, the operating system:
- Uses the page number to find the corresponding frame number from the page
table.
- Combines the frame number and offset to get the actual physical address in
memory.
How Paging Works
- When a process needs to access memory, the operating system divides the logical
address into a page number and an offset.
- It checks the page table to find the physical frame corresponding to the page number.
- The frame number and offset are then combined to create the physical address,
allowing the process to access memory at that location.
Example of Paging
Suppose we have:
- Logical memory divided into 4 pages, each of 1 KB.
- Physical memory divided into 8 frames, each of 1 KB.
If a process wants to access a logical address, say 2050, the address would be broken
down as follows:
- Page size = 1 KB = 1024 bytes.
- Page Number = 2050 ÷ 1024 = 2 (integer division).
- Offset = 2050 mod 1024 = 2.
- If the page table maps page 2 to, say, frame 6, the physical address is 6 × 1024 + 2 = 6146.
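A minimal sketch of this translation, assuming 1 KB pages and a hypothetical page table that maps page 2 to frame 6:

```python
# A sketch of paging address translation for the example above.
PAGE_SIZE = 1024
page_table = {0: 3, 1: 7, 2: 6, 3: 1}      # page number -> frame number

def translate(logical_address):
    page, offset = divmod(logical_address, PAGE_SIZE)
    frame = page_table[page]               # page-table lookup
    return frame * PAGE_SIZE + offset      # combine frame and offset

print(translate(2050))                     # page 2, offset 2 -> 6*1024+2 = 6146
```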
Advantages of Paging
1. Eliminates External Fragmentation:
- Since pages and frames are fixed in size, there’s no need for contiguous memory
allocation, preventing external fragmentation.
Disadvantages of Paging
1. Internal Fragmentation:
- If a page does not fully occupy a frame, the remaining space is wasted, leading to
internal fragmentation.
Multilevel Paging
For large address spaces, multilevel paging divides the page table itself into multiple
levels, reducing memory used for page tables by only creating entries for parts of
memory in use.
1. Compile-Time Binding
- Description: Address binding happens during the compilation of the program.
The compiler generates absolute addresses (i.e., physical addresses) based on
where it assumes the program will reside in memory.
- When Used: This binding is used when the location of the process in memory
is known in advance and will not change.
- Disadvantage: Compile-time binding lacks flexibility because if the location of
the program needs to change, the program must be recompiled to update the
addresses.
2. Load-Time Binding
- Description: Address binding occurs when the program is loaded into memory.
The compiler generates relocatable code (logical addresses), and the loader
converts these logical addresses to physical addresses at load time.
- When Used: Load-time binding is suitable when the memory location of the
program is not known at compile time but remains fixed once the program is
loaded into memory.
- Advantage: It provides more flexibility than compile-time binding because the
program can be loaded at different locations without recompilation.
3. Execution-Time Binding
- Description: Address binding is deferred until the program is actually
executed, allowing the operating system to map logical addresses to physical
addresses dynamically during runtime. This requires hardware support, typically
through the Memory Management Unit (MMU).
- When Used: Execution-time binding is ideal in systems that use dynamic
memory allocation or swapping. It allows a program to move within memory
during execution.
- Advantage: This binding provides the most flexibility, enabling features like
virtual memory, where programs can be larger than physical memory and can be
moved around as needed.
UNIT-4
1. First-In-First-Out (FIFO)
Example:
Consider 3 frames and a page reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.
FIFO will replace the oldest page each time, often leading to frequent page
replacements.
2. Optimal (OPT)
Example:
With the same page reference string (1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5) and 3
frames, OPT would replace pages based on future needs, leading to fewer page
faults compared to other algorithms.
3. Least Recently Used (LRU)
Example:
Using the same page reference string and 3 frames, LRU would track the most
recent use of each page and replace the least recently used, usually resulting in
fewer page faults than FIFO.
4. Least Frequently Used (LFU)
Example:
For the reference string and 3 frames, LFU would track access frequency and
replace the page accessed least frequently, which may or may not be optimal
depending on access patterns.
Example Summary
For the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 3 frames: FIFO incurs 9 page
faults, OPT incurs 7, and LRU incurs 10 (on this particular string LRU happens to do worse
than FIFO; LFU's count depends on its tie-breaking rule).
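These counts can be verified with a short simulation; a minimal sketch for FIFO and LRU follows:

```python
# A minimal page-fault simulation for FIFO and LRU on the string above.
def fifo_faults(ref, frames):
    memory, faults = [], 0
    for page in ref:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)              # evict the oldest-loaded page
            memory.append(page)
    return faults

def lru_faults(ref, frames):
    memory, faults = [], 0
    for page in ref:
        if page in memory:
            memory.remove(page)            # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)              # evict the least recently used
        memory.append(page)                # most-recent end of the list
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print("FIFO faults:", fifo_faults(ref, 3))  # 9
print("LRU faults:", lru_faults(ref, 3))    # 10
```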
2. In an operating system, file access methods define how data within a file can be
read or modified. Each access method organizes and retrieves data differently,
catering to the varying needs of applications, from simple text processing to
complex database management. Here are the main types of file access methods:
1. Sequential Access
- Description: Sequential access reads or writes data in a linear order, one record
after another. This is the most basic and common method, suitable for files
accessed sequentially, like text files or log files.
- How It Works:
- The system starts from the beginning of the file and progresses through each
record in sequence.
- Once a record is read or written, the file pointer automatically moves to the
next record.
- Operations: The main operations are read next (moves the pointer to the next
record and reads it) and write next (appends data at the end of the file).
- Advantages:
- Simple to implement and manage.
- Efficient for tasks like reading logs or processing large text files from start to
end.
- Disadvantages:
- Not ideal for scenarios requiring frequent access to arbitrary records within a
file, as accessing a specific record requires reading through all preceding records.
- Example: Text editors and batch processing programs often use sequential
access.
3. Indexed Access
- Description: Indexed access combines sequential and direct access by creating
an index, which is a data structure containing pointers to records within the file.
This index enables quick access to records, similar to an index in a book.
- How It Works:
- An index file is created alongside the data file, containing keys and pointers
to corresponding records in the data file.
- The index can be searched (often using techniques like binary search),
allowing fast location of any record.
- Once the location is found in the index, the pointer provides direct access to
the data file.
- Operations: The main operations include searching the index (to find the
record pointer) and direct access to the record using the pointer.
- Advantages:
- Fast access to records based on key values, making it efficient for searching
and retrieval.
- Supports both sequential and random access methods, making it versatile.
- Disadvantages:
- Requires additional storage for the index, increasing the file size.
- If the index becomes too large, it can slow down access times and require its
own management.
- Example: Used in large databases and systems where efficient retrieval based
on keys (e.g., customer IDs) is essential.
4. Hashed Access
- Description: Hashed access (or hashing) is a technique where a hash function
computes an address (hash value) based on a key. This address is then used to
directly access the record in the file.
- How It Works:
- A hash function is applied to a key to compute an address where the record
should be stored or retrieved.
- The computed address is where the data is placed in the file.
- When the same key is used again, the hash function produces the same
address, allowing quick access to the record.
- Collisions (when two keys produce the same address) are managed using
methods like open addressing or chaining.
- Operations: The main operations include hash computation (to find the
location based on the key), read and write operations (at the computed address).
- Advantages:
- Extremely fast access for retrieval, as there is no need to search through
records sequentially.
- Ideal for applications requiring constant-time access to records.
- Disadvantages:
- Collisions require careful handling, adding complexity.
- Not suitable for applications where records need to be accessed in sequential
order.
- Example: Hash tables are used in file systems and databases for quick look-up
operations, such as storing and retrieving user accounts based on unique IDs.
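A toy sketch of hashed access with chaining (the bucket count and records are arbitrary; real systems use carefully chosen hash functions and on-disk bucket layouts):

```python
# A toy sketch of hashed access: the hash of the key picks a bucket,
# and collisions are handled by chaining within the bucket.
NUM_BUCKETS = 8
buckets = [[] for _ in range(NUM_BUCKETS)]

def put(key, record):
    b = hash(key) % NUM_BUCKETS            # hash function -> bucket address
    for i, (k, _) in enumerate(buckets[b]):
        if k == key:
            buckets[b][i] = (key, record)  # overwrite an existing key
            return
    buckets[b].append((key, record))       # collision handled by chaining

def get(key):
    b = hash(key) % NUM_BUCKETS
    for k, record in buckets[b]:
        if k == key:
            return record
    raise KeyError(key)

put("user42", {"name": "Ada"})
print(get("user42"))                       # {'name': 'Ada'}
```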
2. Page Table:
- Each process has a page table, which maintains the mapping between pages
(logical memory) and frames (physical memory).
- The page table entry (PTE) contains the frame number where a page is stored.
- When a process needs to access a memory address, the operating system looks
up the page table to determine the frame number and translates the logical address
into a physical address.
4. Advantages of Paging:
- No External Fragmentation: Paging allocates memory in fixed blocks, so
there's no problem of scattered free space.
- Efficient Memory Use: Processes can be loaded into non-contiguous memory,
making better use of available RAM.
- Easier Memory Allocation: As memory is allocated in fixed-size blocks, it
simplifies allocation and management.
5. Disadvantages of Paging:
- Page Table Overhead: Each process has a page table, which can consume
significant memory, especially with large address spaces.
- Internal Fragmentation: Some pages may not fully utilize the allocated frame,
leading to wasted space within each frame.
- Translation Overhead: Each memory access requires an additional lookup in
the page table, which can slow down performance. However, this is often
mitigated by using a Translation Lookaside Buffer (TLB).
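A toy sketch of a TLB in front of a page table, with simple FIFO eviction (the capacity and mappings are arbitrary; real TLBs are hardware caches):

```python
# A toy sketch of TLB-assisted lookup: the TLB caches a few page-table
# entries so that most translations avoid the page-table walk.
page_table = {p: p + 100 for p in range(256)}   # hypothetical mapping
tlb = {}                                        # page -> frame cache
TLB_CAPACITY = 4

def lookup(page):
    if page in tlb:                             # TLB hit: fast path
        return tlb[page], "hit"
    frame = page_table[page]                    # TLB miss: walk the table
    if len(tlb) >= TLB_CAPACITY:
        tlb.pop(next(iter(tlb)))                # evict the oldest entry
    tlb[page] = frame
    return frame, "miss"

for p in [5, 6, 5, 7, 8, 9, 5]:
    print(p, lookup(p))
```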
4. File allocation methods are strategies used by file systems to manage and
organize how files are stored on disk storage, such as hard drives or SSDs. These
methods ensure that files are stored efficiently, allowing for fast access and
minimal wasted space. The main file allocation methods are contiguous allocation,
linked allocation, and indexed allocation. Let's explore each one in detail.
1. Contiguous Allocation
In contiguous allocation, each file is stored in consecutive blocks on the disk. This
method is simple and fast because reading from or writing to a file is easy, as all
blocks are in a sequence.
- How it works: When a file is created, the system searches for a contiguous block
of free storage space large enough to hold the entire file. The file metadata stores
the starting block number and the length (in blocks).
- Advantages:
- Fast Access: Reading or writing files is efficient because the disk head doesn’t
need to move much, resulting in faster sequential access.
- Simple Management: The only metadata needed is the starting block and the
length of the file.
- Disadvantages:
- External Fragmentation: As files are created and deleted, it becomes
challenging to find large enough contiguous free spaces, causing fragmentation
over time.
- File Size Limitation: If a file needs to grow, it may not be possible to extend it
if adjacent blocks are occupied.
- Example: Imagine a disk where a file takes up blocks 0-4, and another file starts
at block 5. If the file at blocks 0-4 needs to grow, it might require moving the file
to another part of the disk where a larger contiguous space is available.
2. Linked Allocation
In linked allocation, each file is stored in separate blocks scattered throughout the
disk, and each block contains a pointer to the next block in the file sequence.
- How it works: Each file has a starting block. In each block, a pointer is stored
pointing to the next block in the sequence. The last block of the file has a special
pointer, such as `NULL`, to indicate the end of the file.
- Advantages:
- No External Fragmentation: Files do not require contiguous blocks, so space
utilization is better even if there are small free blocks scattered throughout the
disk.
- Dynamic File Size: Files can grow by adding more blocks to the chain, as long
as there are free blocks on the disk.
- Disadvantages:
- Slow Access Time: Accessing a specific block within a file requires traversing
the entire chain, making random access slower.
- Pointer Overhead: Each block needs to store a pointer, which consumes disk
space.
- Reliability Issues: If a block is damaged or the pointer is lost, access to the rest
of the file can be compromised.
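A toy sketch of linked allocation: each "disk block" stores its data plus a pointer to the next block, and reading the file means chasing the chain from the start block (block numbers and contents are arbitrary):

```python
# A toy sketch of linked allocation: reading a file means following the
# per-block pointers from the starting block to the end-of-file marker.
disk = {                                   # block number -> (data, next block)
    9: ("AB", 16),
    16: ("CD", 1),
    1: ("EF", None),                       # None plays the role of NULL
}

def read_file(start_block):
    data, block = "", start_block
    while block is not None:
        chunk, block = disk[block]         # follow the pointer to the next block
        data += chunk
    return data

print(read_file(9))                        # "ABCDEF"
```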
3. Indexed Allocation
In indexed allocation, all the pointers to a file's blocks are gathered into one place: the
index block.
- How it works: When a file is created, an index block is allocated for it. This
block holds a list of all the disk blocks that the file occupies. Each entry in the
index block points to one block of the file. For example, if the index block has
entries for blocks 5, 12, and 18, these are the blocks where file data is stored.
- Advantages:
- Direct Access: Each block can be directly accessed through the index, making
random access fast.
- Dynamic File Size: The file can grow by adding new blocks and updating the
index, without needing contiguous space.
- No External Fragmentation: Blocks can be scattered across the disk, so free
space utilization is more efficient.
- Disadvantages:
- Index Block Overhead: An index block is needed for every file, consuming disk
space.
- File Size Limitation: If the index block has a limited number of pointers, it
restricts the file size unless multi-level indexing is used (e.g., indirect blocks in
UNIX file systems).
- Example: If a file occupies blocks 2, 5, and 8, the index block will contain
pointers to these blocks. To access data, the system reads the index block and
retrieves the pointers, enabling direct access to each block in the sequence.
4. Multi-Level Indexed Allocation
- How it works: In this method, the file’s index block may point not only to data
blocks but also to other index blocks, creating a multi-level tree structure. For
example, there might be a primary index block that points to secondary index
blocks, which then point to the actual data blocks.
- Advantages:
- Support for Large Files: Multiple levels of indexing allow files to be very large,
as the system can handle more blocks through the hierarchical indexing structure.
- Efficient Access for Small Files: Small files may use a single-level index, while
large files benefit from additional levels.
- Disadvantages:
- Complexity: Multi-level indexing increases complexity in file management.
- Increased Overhead: Additional index blocks require more storage and may
reduce performance due to additional accesses.
UNIT-5
2. Authentication Factors
Authentication is based on one or more of the following factors:
- Something You Know (Knowledge): Information only the user should know, such as
passwords, PINs, or answers to security questions.
- Something You Have (Possession): This includes physical devices such as smart
cards, USB tokens, or mobile devices. Examples include a one-time password
(OTP) generated by an app or sent via SMS.
- Something You Are (Inherence): Unique biological characteristics such as
fingerprints, facial features, or iris patterns.
a) Password-Based Authentication
- Overview: Passwords are the most common form of authentication and are a
type of knowledge-based authentication.
- Advantages: Simple to implement and widely understood.
- Disadvantages: Passwords are prone to being guessed, stolen, or compromised.
Users often choose weak passwords or reuse passwords across multiple accounts,
which makes them vulnerable to attacks.
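A hedged sketch of safer password handling with Python's standard library: store only a salted, iterated hash and compare in constant time (the iteration count and parameters are illustrative, not a policy recommendation):

```python
# A sketch of salted password hashing: only (salt, digest) is stored,
# never the plaintext password.
import hashlib, hmac, os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)          # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```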
b) Two-Factor Authentication (2FA) and Multi-Factor Authentication (MFA)
- Overview: 2FA requires two factors for authentication, typically combining
something you know (password) with something you have (an OTP sent to a
phone). MFA extends this to use two or more factors.
- Advantages: Stronger than single-factor authentication, as it requires more than
one form of verification.
- Disadvantages: Can be inconvenient if additional factors (such as mobile
devices) are unavailable or if SMS/OTP-based authentication is compromised by
social engineering attacks.
c) Biometric Authentication
- Overview: This method uses unique biological characteristics (e.g., fingerprints,
facial recognition, iris scans) for authentication.
- Advantages: High level of security and convenience; biometrics are difficult to
duplicate.
- Disadvantages: Can be costly to implement, and some biometric data can be
spoofed with high-tech methods. Privacy concerns arise if biometric data is stolen
or misused.
d) Token-Based Authentication
- Overview: Tokens are physical devices or software-generated codes that grant
access. Hardware tokens may display an OTP, or software tokens may be sent to
an app or device.
- Advantages: Enhances security by using a dynamic code that changes regularly.
- Disadvantages: Tokens can be lost or stolen, and hardware tokens may incur
additional costs.
e) Certificate-Based Authentication
- Overview: Digital certificates use cryptographic methods to verify identity.
Commonly used in enterprise environments and public key infrastructure (PKI)
systems.
- Advantages: Highly secure, especially for remote access and secure
communication (e.g., SSL/TLS).
- Disadvantages: Requires complex infrastructure and management of certificates.
Authentication Protocols:
- OpenID Connect (OIDC): An identity layer on top of OAuth 2.0 that verifies
user identity, commonly used for SSO.
Threats to Authentication:
- Phishing and Social Engineering: Attackers may use phishing tactics to trick
users into revealing authentication credentials, especially in password-based and
OTP methods.
- Credential Theft: Data breaches can lead to large volumes of passwords being
stolen. This has led to an increase in credential stuffing attacks, where attackers
attempt to use stolen credentials on other systems.
Definition
A monitor is an abstract data type that encapsulates shared variables, procedures
(methods), and synchronization mechanisms. It allows threads or processes to
safely execute code that accesses shared resources while preventing race
conditions and ensuring mutual exclusion.
Structure of a Monitor
A monitor typically consists of the following components:
1. Shared Variables: The data members that are shared among the processes or
threads that use the monitor.
2. Procedures: Functions or methods defined within the monitor that provide
access to the shared variables. These procedures are the only means of interacting
with the shared data.
3. Synchronization Constructs:
- Condition Variables: Used to block a process or thread until a particular
condition is met. They allow threads to wait for certain conditions to be true
before proceeding.
- Mutex Locks: Ensures mutual exclusion when a process or thread is executing
a monitor procedure. Only one process can execute a monitor procedure at a time.
Operations in a Monitor
Monitors provide two main types of operations:
1. Monitor Procedures: A thread invokes a monitor procedure to operate on the shared
data; the monitor's lock guarantees that at most one thread is active inside the monitor
at any time.
2. Condition Variables: These allow threads to wait within the monitor until
certain conditions are satisfied:
- Wait: A thread can call a wait operation on a condition variable, releasing the
monitor's lock and entering a waiting state until another thread signals it to
continue.
- Signal: A thread can signal a condition variable to wake one waiting thread (if
any) and allow it to re-acquire the monitor's lock.
Benefits of Monitors
1. Encapsulation: Monitors encapsulate shared data and synchronization,
providing a clean interface for interaction while hiding the complexity of
synchronization mechanisms.
Limitations of Monitors
1. Complexity in Implementation: While monitors simplify synchronization, they
can still introduce complexity in design, particularly for large systems or when
multiple monitors interact.
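A sketch of a monitor in Python: a threading.Condition bundles the monitor lock with a condition variable, giving mutual exclusion over the shared buffer plus wait/signal operations (the capacity and item counts are arbitrary):

```python
# A sketch of a monitor: the Condition's lock enforces mutual exclusion
# over the shared buffer; wait/notify are the condition-variable ops.
import threading

class BoundedBufferMonitor:
    def __init__(self, capacity):
        self.buffer, self.capacity = [], capacity
        self.cond = threading.Condition()  # monitor lock + condition variable

    def put(self, item):                   # monitor procedure
        with self.cond:                    # acquire the monitor lock
            while len(self.buffer) >= self.capacity:
                self.cond.wait()           # release the lock and block
            self.buffer.append(item)
            self.cond.notify_all()         # signal waiting threads

    def take(self):                        # monitor procedure
        with self.cond:
            while not self.buffer:
                self.cond.wait()
            item = self.buffer.pop(0)
            self.cond.notify_all()
            return item

monitor = BoundedBufferMonitor(2)
threading.Thread(target=lambda: [monitor.put(i) for i in range(5)]).start()
print([monitor.take() for _ in range(5)])  # [0, 1, 2, 3, 4]
```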
I/O devices can be characterized along several dimensions:
1. Type of Operation:
- Input Devices: These devices send data to the computer (e.g., keyboard,
mouse, scanner).
- Output Devices: These devices receive data from the computer (e.g., monitor,
printer, speakers).
- Storage Devices: These devices can both send and receive data (e.g., hard
drives, USB flash drives).
2. Speed:
- Different I/O devices operate at varying speeds. For example, a hard disk drive
is slower compared to RAM, while SSDs offer faster data access than traditional
HDDs.
3. Data Transfer Method:
- Serial vs. Parallel: Serial devices send data one bit at a time, while parallel
devices send multiple bits simultaneously. For instance, USB interfaces typically
work in a serial manner.
4. Interface Type:
- Devices may use various communication protocols such as USB, SATA, or
Ethernet, which dictate how data is transmitted and received.
5. Buffering:
- I/O devices often use buffers (temporary storage areas) to accommodate
differences in processing speed between the device and the CPU.
6. Device Control:
- I/O devices have specific control mechanisms that may include device drivers,
which provide the necessary interface between the device and the operating
system.
7. Error Handling:
- I/O devices need mechanisms to detect and handle errors that may occur
during data transfer.
The main goals of I/O management are:
1. Efficiency:
- Maximize the use of system resources by managing I/O operations effectively.
This includes optimizing data transfer rates and minimizing idle time for devices.
2. Fairness:
- Ensure that all processes have fair access to I/O devices, preventing any single
process from monopolizing resources.
3. Simplicity:
- Provide a simple interface for application programs to interact with I/O
devices, abstracting the complexity of device management.
4. Reliability:
- Implement error detection and correction mechanisms to ensure reliable data
transmission and integrity during I/O operations.
5. Security:
- Protect data during transfer and restrict access to sensitive devices to
authorized processes.
The application I/O interface serves as a bridge between application programs and
the hardware devices. Here are some key aspects:
1. System Calls:
- Applications interact with I/O devices using system calls (e.g., read, write).
These are predefined functions that applications use to request services from the
operating system.
2. Device Independence:
- The interface should hide the specifics of the underlying hardware, allowing
applications to perform I/O operations without knowing details about the device.
3. Buffering:
- The interface may include buffering mechanisms to handle differences in data
processing rates, enabling smooth data flow between applications and devices.
4. Error Handling:
- It provides mechanisms for applications to handle errors that may occur during
I/O operations, such as file not found or device not ready.
5. Access Control:
- The interface manages permissions and access control to prevent unauthorized
access to I/O devices.
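A small sketch of the system-call-style interface using Python's os module, whose open/read/write/close functions are thin wrappers over the corresponding OS system calls (the file name is hypothetical):

```python
# A sketch of the system-call style I/O interface described above.
import os

fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"hello, device-independent I/O\n")
os.close(fd)

fd = os.open("example.txt", os.O_RDONLY)
data = os.read(fd, 1024)                   # read up to 1024 bytes
os.close(fd)
print(data.decode())
```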
I/O systems encompass all components and processes involved in managing input
and output operations within a computer system. Key elements include:
1. I/O Devices:
- Physical hardware components (e.g., keyboards, mice, printers, network
interfaces) that facilitate user interaction and data processing.
2. Device Drivers:
- Software components that enable the operating system to communicate with
hardware devices. Each driver is specific to a particular device and translates
general I/O requests into device-specific commands.
3. I/O Scheduling:
- The process of managing the order in which I/O requests are serviced.
Scheduling algorithms (e.g., FIFO, Shortest Seek Time First) are used to optimize
performance and ensure fair access (a small sketch comparing the two appears after this list).
6. Interrupt Handling:
- I/O devices generate interrupts to signal the CPU when they need attention.
The operating system must efficiently handle these interrupts to ensure timely
processing of I/O requests.
8. Network I/O:
- Involves managing input and output over networks, including protocols, data
transmission, and error handling in network communications.
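As referenced under I/O Scheduling above, here is a minimal sketch comparing FIFO and Shortest Seek Time First by total head movement (the request queue and starting head position are illustrative values):

```python
# A sketch comparing FIFO and SSTF disk scheduling by total head movement.
def fifo_movement(requests, head):
    total = 0
    for r in requests:                     # service requests in arrival order
        total += abs(r - head)
        head = r
    return total

def sstf_movement(requests, head):
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))  # shortest seek
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print("FIFO:", fifo_movement(queue, 53))   # 640 cylinders of head movement
print("SSTF:", sstf_movement(queue, 53))   # 236 cylinders of head movement
```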