OS PYQs SOL.

The document covers key concepts in Operating Systems, including objectives and functions of an OS, differences between processes and threads, Resource Allocation Graphs, file attributes and operations, virtual memory, file allocation methods, deadlock conditions, memory fragmentation, and multithreading models. It explains how an OS manages resources, memory, and processes, while also detailing the advantages of techniques like virtual memory and multithreading. Additionally, it discusses the implications of fragmentation and the importance of efficient resource allocation.

OS Theory Questions

1. What are the various objectives and functions of Operating Systems? [DEC
22/MAY 24]
- An OS is a foundational software program that manages a computer’s hardware
and software resources, acting as a bridge between users and the hardware.
- Objectives (include details):
o Efficient resource utilization, Convenience to the user, System performance,
Security and protection, Support for multiprogramming and multitasking,
Error detection and handling, Controlled access to files and devices, Job
scheduling.
- Functions (include details):
o Process management, Memory management, File system management,
Device management, Storage management, Security and access control,
Error handling, System performance monitoring.

2. Differentiate between process and threads. [DEC 22/MAY 24]


Process | Thread
A process is a program in execution. | A thread is a segment of a process.
A process takes more time to terminate. | A thread takes less time to terminate.
A process takes more time for creation. | A thread takes less time for creation.
A process takes more time for context switching. | A thread takes less time for context switching.
Processes are less efficient in terms of communication. | Threads are more efficient in terms of communication.
Every process runs in its own memory. | Threads share memory.
A process is heavyweight compared to a thread. | A thread is lightweight, as each thread in a process shares code, data, and resources.
If one process is blocked, it does not affect the execution of other processes. | If a user-level thread is blocked, all other user-level threads of that process are blocked.
A process has its own Process Control Block, stack, and address space. | A thread has its parent's PCB, its own Thread Control Block and stack, and a common address space.
Processes do not share data with each other. | Threads share data with each other.

3. Explain about Resource Allocation Graph (RAG). [DEC 22]
- A Resource Allocation Graph (RAG) is a visual tool used in operating systems to
illustrate how resources are assigned to processes and which processes are
waiting for resources. It helps in detecting potential deadlocks more intuitively
than tables.
- Components of RAG:
o Vertices:
o Process vertex: represented as a circle.
o Resource vertex: represented as a rectangle.
 ▪ Single instance resource: only one instance is available.
 ▪ Multiple instance resource: multiple instances can be shared among processes.
o Edges:
o Request edge: from process to resource, indicating a request.
o Assignment edge: from resource to process, indicating the resource is
assigned.
- Illustration:

- Key point:
o RAG helps visualize how resources and processes interact.
o It is useful for deadlock detection by showing cycles or wait conditions.
o While Banker's algorithm uses tables for deadlock avoidance, RAG is
preferred for visual clarity in deadlock detection.

4. Explain about file attributes, file operations, and file types. [DEC 22]
- File is a logical storage unit of information.
- Each file has characteristics like file name, file type, date, etc. These
characteristics are referred to as file attributes. They are:
o Name – The file name is the name given to the file; it is a string of characters.
o Identifier – A unique number for the file that identifies it within the file system; unlike the file name, it is not human-readable.
o Type – Type of file specifies the type of file such as archive file (.zip),
source code file (.c, .java), .docx file, .txt file, etc.
o Location – Specifies location of file on device (directory path). This attribute
is a pointer to a device.
o Size – Specifies current size of file (in Kb, Mb, Gb, etc.) & possibly the
maximum allowed size of file.
o Protection – Specifies information about Access control (Permissions about
who can read, edit, write & execute the file). Provides security to sensitive &
private information.
o Time, date & user identification – tells about date & time on which file was
created, last modified, created & modified by which user, etc.
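Many of these attributes can be inspected programmatically. As an illustrative sketch (not part of the original notes), Python's `os.stat` exposes the size, timestamps, and permission bits of a file:

```python
import os
import stat
import tempfile

# Create a temporary file with known contents.
with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as f:
    f.write("hello, file attributes")
    path = f.name

info = os.stat(path)                          # fetch the file's metadata
size = info.st_size                           # current size in bytes
readable = bool(info.st_mode & stat.S_IRUSR)  # owner-read permission bit
modified = info.st_mtime                      # last-modification timestamp

print(size, readable)
os.remove(path)                               # clean up
```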
- A set of essential tasks & actions directed at files & directories residing within a
computer's file system are referred to as file operations. They are:
o File Creation & Manipulation: creating files, creating directories, opening
files, reading files, writing files, renaming files & directories, deleting files &
directories
o File Organization & Search: copying files, moving files, searching for files
o File Security & Metadata: file permissions, file ownership, file metadata
o File Compression & Encryption: file compression, file encryption
- File types:

5. What is virtual memory? Mention its advantages. [DEC 22/DEC 23]
- Virtual Memory is a memory management technique used by operating systems
to provide the illusion of a large, continuous memory space to programs, even if
the actual physical memory (RAM) is limited.
- Key points:
o Virtual Addressing: Programs use virtual addresses instead of physical
addresses. These are translated into real memory addresses during
execution.
o Removes the need for programmers to manually manage memory
hierarchy (RAM vs disk).
o Allows programs to use larger memory spaces than physically available.
o Efficient memory sharing when multiple programs run.
o Prevents programs from interfering with each other.
o Eliminates the limitation of small physical memory.
- How virtual memory works:
o Divides memory into fixed-size pages.
o Pages are stored on disk and loaded into RAM as needed.
o MMU (Memory Management Unit) performs virtual-to-physical address
translation using page tables.
o If a required page isn’t in RAM, a page fault occurs, and the OS loads the
page from disk.
- Techniques involved(explain both in brief):
o Paging
o Segmentation
- Advantages:
o Programs are not limited by physical RAM.
o More programs can run simultaneously (better multiprogramming).
o Improves CPU utilization and system throughput.
o Reduces I/O operations for loading/swapping.
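The page-fault mechanism described above can be sketched with a toy simulation (the class and names are illustrative, not from the source): a small set of RAM frames holds resident pages, and a page is "loaded from disk" only when first referenced, evicting the oldest page when RAM is full.

```python
# Toy demand-paging simulator: pages live on "disk" and are loaded
# into a limited set of RAM frames only when referenced (FIFO eviction).
from collections import deque

class DemandPager:
    def __init__(self, num_frames):
        self.frames = deque()        # pages currently resident in RAM
        self.num_frames = num_frames
        self.page_faults = 0

    def access(self, page):
        if page in self.frames:
            return "hit"
        self.page_faults += 1        # page not resident -> page fault
        if len(self.frames) == self.num_frames:
            self.frames.popleft()    # evict the oldest page (FIFO)
        self.frames.append(page)     # "load" the page from disk
        return "fault"

pager = DemandPager(num_frames=3)
refs = [1, 2, 3, 1, 4, 1]
results = [pager.access(p) for p in refs]
print(results, pager.page_faults)
```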

6. a) Explain file allocation methods in detail with proper diagram [DEC 22/MAY 24]
b) Describe implementation of file allocation techniques. [DEC 23/DEC 24]
c) Write short note on: File Allocation methods [DEC 23]
- Files are divided into logical blocks, OS or file management system is responsible
for allocating blocks to files. Space is allocated to a file as one or more portions
(contiguous set of allocated disk blocks)
- File allocation methods: contiguous allocation, chained allocation, indexed
allocation
- Contiguous allocation:
o A single contiguous set of blocks is allocated to a file at time of file creation.
It is a pre-allocation strategy using variable size portions. File allocation
table needs just a single entry for each file showing starting block & length
of file. Best method from point of view of individual sequential file. Multiple
blocks can be read in at a time to improve I/O performance for sequential
processing. It is easy to retrieve a single block.

o Disadvantages – external fragmentation will occur, making it difficult to find contiguous blocks of space of sufficient length (a compaction algorithm will be necessary to free up additional space on the disk); due to pre-allocation, it is necessary to declare the size of the file at the time of creation.
- Chained allocation:
o Allocation is on individual block basis, each block contains a pointer to next
block in chain. File table needs just a single entry for each file showing
starting block & length of file. Pre-allocation is possible, it is more common
to allocate blocks as needed. Any free block can be added to the chain.
Blocks need not be contiguous. Increase in file size is always possible if a
free disk block is available. No external fragmentation because only one
block at a time is needed but there can be internal fragmentation (only in
last disk block of file)

o Disadvantages – internal fragmentation exists in last disk block of file,
overhead of maintaining pointer in every disk block, if pointer of any disk
block is lost file will be truncated, supports only sequential access of files.
- Indexed Allocation:
o Addresses many problems of contiguous & chained allocation. File
allocation table contains a separate one-level index for each file: index has
one entry for each block allocated to file. Allocation may be on basis of
fixed-size blocks or variable-sized blocks. Allocation by blocks eliminates
external fragmentation whereas allocation by variable-size blocks improves
locality. It supports both sequential & direct access to file & is most popular
form of file allocation.

o Disadvantages – pointer overhead for indexed allocation is greater than for chained allocation; for very small files (spanning only 2–3 blocks), indexed allocation would dedicate an entire block to pointers, which is inefficient in terms of memory utilization.
- Comparison:
  Criterion | Contiguous | Chained | Indexed (fixed blocks) | Indexed (variable blocks)
  Pre-allocation | Necessary | Possible | Possible | Possible
  Portion type | Variable-size | Fixed-size blocks | Fixed-size blocks | Variable-size
  Portion size | Large | Small | Small | Medium
  Allocation frequency | Once | Low to high | High | Low
  Time to allocate | Medium | Long | Short | Medium
  File allocation table size | One entry | One entry | Large | Medium

- Implementation:
o Contiguous allocation:
Directory Entry:
Filename | Start Block | Length
o Chained allocation:
Block: [Data | Pointer to Next Block]
Directory Entry:
Filename | Starting Block
o Indexed allocation:
Directory Entry:
Filename | Index Block Address

Index Block:
[Block 1 | Block 2 | Block 3 | …]
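The directory-entry layouts above can be made concrete with a toy model (hypothetical structures, not a real file system). This sketch implements indexed allocation: the directory entry points to an index block, which lists the data blocks belonging to the file.

```python
# Minimal model of indexed allocation: each file's directory entry points
# to an index block, which lists the data blocks belonging to the file.
disk = {}                      # block number -> contents
free_blocks = list(range(10))  # simple free-block list
directory = {}                 # filename -> index block number

def create_file(name, data_chunks):
    index_block = free_blocks.pop(0)   # one block reserved for the index
    data_blocks = []
    for chunk in data_chunks:
        b = free_blocks.pop(0)         # any free block will do, so there is
        disk[b] = chunk                # no external fragmentation
        data_blocks.append(b)
    disk[index_block] = data_blocks    # the index block stores the pointers
    directory[name] = index_block

def read_file(name):
    index_block = directory[name]
    return [disk[b] for b in disk[index_block]]  # direct access via the index

create_file("notes.txt", ["chunk-A", "chunk-B", "chunk-C"])
print(read_file("notes.txt"))
```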

7. a) Give the explanation of necessary conditions for deadlock. Explain how a resource allocation graph determines a deadlock. [DEC 22/MAY 24]
b) Write short note on: Necessary conditions for deadlock [MAY 23]
c) What is a critical region? Explain necessary conditions for deadlock.
[DEC 23]
- Conditions for deadlock (Coffman conditions):
o Mutual exclusion – at least one resource must be held in a non-sharable
mode i.e. only one process at a time can use resource. If another process
requests that resource then the requesting process must be delayed until
resource has been released.
o Hold & wait – a process must be holding at least one resource & waiting to
acquire additional resources that are currently being held by other
processes.
o No pre-emption – Resources cannot be pre-empted, i.e. a resource can be
released only voluntarily by the process holding it, after that process has
completed its task (not forcefully).
o Circular wait – a set {P0, P1, …, Pn} of waiting processes must exist such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn−1 is waiting for a resource held by Pn, & Pn is waiting for a resource held by P0.
- How a RAG determines a deadlock?
o Build the RAG using the current state (include Q3's theory – RAG):
 ▪ Draw all the processes and resources.
 ▪ Add request edges for resources being requested.
 ▪ Add assignment edges for resources currently held by processes.
o Check for cycles:
 ▪ If the graph has no cycles → no deadlock.
 ▪ If the graph has a cycle:
   - One instance per resource → deadlock exists.
   - Multiple instances per resource → a cycle may or may not indicate a deadlock.
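The cycle check above can be sketched in code (a simplified model for single-instance resources; the process and resource names are illustrative). Request edges run from process to resource and assignment edges from resource to process, so a cycle in the combined directed graph means deadlock.

```python
# RAG for single-instance resources as a directed graph:
# request edge  P -> R, assignment edge  R -> P.
# A cycle in this graph means a deadlock.
def has_cycle(graph):
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt not in visited:
                if dfs(nxt):
                    return True
            elif nxt in on_stack:     # back edge -> cycle found
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in list(graph) if n not in visited)

# P1 holds R1 and requests R2; P2 holds R2 and requests R1 -> deadlock.
rag = {"R1": ["P1"], "P1": ["R2"], "R2": ["P2"], "P2": ["R1"]}
print(has_cycle(rag))

# P1 holds R1 and P2 merely requests R1 -> no cycle, no deadlock.
rag2 = {"R1": ["P1"], "P2": ["R1"]}
print(has_cycle(rag2))
```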
- Critical region:
o A critical section is a part of a program where a shared resource (like a
variable, file, or memory location) is accessed and modified.
o Since multiple processes or threads may try to access the shared resource
simultaneously, the critical section must be executed by only one process at
a time to prevent data inconsistency or race conditions.
o It contains code that accesses shared resources.
o Mutual exclusion is required to ensure that only one process is executing in
its critical section at any time.
o The OS or programmer uses synchronization techniques (like semaphores,
mutexes, monitors) to manage critical sections.
o Problems:
 Synchronization
 Increased overhead
 Potential for deadlocks
 Limitations on parallelism
 Race condition
o Solution
 Mutual Exclusion – Only one process in the critical section at a time.
 Progress – If no process is in the critical section, others can proceed.
 Bounded Waiting – A process will eventually get a turn.

8. a) What is Internal fragmentation? [DEC 22]
b) Explain memory fragmentation. [DEC 23/DEC 24]
c) Write short note on: Memory fragmentation [DEC 24]
- Fragmentation can happen when a file is too large to fit a single contiguous block
of free space on storage medium, or when blocks of free space on medium are
insufficient to hold the file. As system must search for & retrieve individual
fragments from different locations in order to open file, fragmentation can cause
problems when reading or accessing file.
- Types: Internal & External
- Internal fragmentation occurs when there is unused space within a memory block.
When system employs a fixed-size block allocation method internal fragmentation
occurs.
- External fragmentation occurs when a storage medium has many small blocks of free space scattered throughout it. This can happen when a system creates & deletes files frequently, leaving many small blocks of free space on the medium. When the system needs to store a new file, it may be unable to find a single contiguous block of free space large enough to store the file & must store the file in multiple smaller blocks instead.
- Memory fragmentation can occur at memory management level where system
allocates & deallocates memory blocks dynamically.
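The internal-fragmentation case can be made concrete with a small calculation (the block and file sizes are illustrative, not from the source): with fixed 4 KB blocks, a 10 KB file wastes part of its last block.

```python
import math

BLOCK_SIZE = 4096                  # fixed block size in bytes (4 KB)
file_size = 10 * 1024              # a 10 KB file

blocks_needed = math.ceil(file_size / BLOCK_SIZE)     # last block only partly used
allocated = blocks_needed * BLOCK_SIZE                # total space handed to the file
internal_fragmentation = allocated - file_size        # wasted space inside the last block

print(blocks_needed, internal_fragmentation)
```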
9. a) What is a thread? How multithreading is beneficial? Compare and
contrast different multithreading models. [DEC 22/MAY 24]
b) What is Threading and Multithreading? Explain importance of
Multithreading. [MAY 23]
- The unit of dispatching is referred to as a thread or lightweight process. The unit
of resource ownership is referred to as a process or task.
- Ability of an OS to support multiple, concurrent paths of execution within a single
process.
- Benefits of multithreading:
o Responsiveness – in interactive application, it may allow a program to
continue running even if a part of it is blocked or is performing lengthy
operation thereby increasing responsiveness to user.
o Resource sharing – threads share memory & resources of process to which
they belong by default. Benefit of sharing code & data is that it allows
application to have several threads of activity within same address space
(message passing, shared memory)
o Economy – threads share the memory of the process they belong to, so it is more economical to create & context switch threads.
o Scalability – benefits of multi-programming increase in case of
multiprocessor architecture where threads may be running parallel on
multiprocessors.
o Minimized system resource usage – threads have minimal influence on
system’s resources. Overhead of creating, maintaining, & managing
threads is lower than a general process.
o Enhanced concurrency – enhances concurrency of multi-CPU machine as
every thread executes on parallel processor.
o Reduced Context Switching Time – threads minimize context switching time
& virtual memory space remains same.
- Multithreading models: many to many, many to one & one to one
- Many to many model: multiple user threads multiplex to same or lesser number of
kernel level threads. Number of kernel level threads are specific to machine,
advantage of this model is if a user thread is blocked we can schedule other user
threads to other kernel thread. System doesn’t block if a particular thread is
blocked. Best threading model.

- Many to one model: multiple user threads are mapped to one kernel thread. When a user thread makes a blocking system call, the entire process blocks. Since there is only one kernel thread, only one user thread can access the kernel at a time, so multiple threads cannot run on multiple processors at the same time. Thread management is done at user level, so it is more efficient.

- One to one model: a one-to-one relationship between kernel & user threads. Here multiple threads can run in parallel on multiple processors. The problem with this model is that creating a user thread requires creating a corresponding kernel thread. Since each user thread is connected to a different kernel thread, if any user thread makes a blocking system call, the other user threads are not blocked.

10. Explain paging in detail. Describe how logical address is converted into
physical address. [DEC 22/MAY 24]
- Partition memory into small equal fixed-size chunks & divide each process into
same size chunks. These chunks of a process are called pages & these chunks
of memory are called frames. OS maintains a page table for each process. Page
table contains frame location for each page in process, memory address consist
of a page number & offset within the page.
- The process of retrieving processes, in the form of pages, from secondary storage into main memory is known as paging. The basic purpose of paging is to divide each process into pages.
- Mapping between logical pages & physical page frames is maintained by page
table which is used by memory management unit to translate logical addresses
into physical addresses. Page table maps each logical page number to a physical
page frame number.
- Logical address to physical address
11. a) What is semaphore and its types? How the classic synchronization
problem – Dining philosopher is solved using semaphores? [DEC 22/MAY
24]
b) What is semaphore? What is its significance? [MAY 23]
c) Write short note on: Semaphores [DEC 23]
- A semaphore is an integer value used for signalling among processes. It is a synchronization tool that does not require busy waiting & can only be accessed via indivisible (atomic) operations. Three operations, all atomic, can be performed on semaphores – initialize, increment (signal) & decrement (wait). A queue is used to hold processes waiting on a semaphore; strong semaphores use FIFO order, while weak semaphores don't specify the order of removal from the queue.
- Types of semaphore – Counting semaphore (integer value can range over
unrestricted domain) & Binary semaphore (integer value can range only between
0 & 1, it can be simpler to implement)
wait(s):
    while s <= 0 do (keep testing);
    s = s - 1;

signal(s):
    s = s + 1;
Modifications to integer value of semaphore in wait & signal operations are
executed indivisibly, when one process modifies semaphore then no other
process can simultaneously modify same semaphore value.
- Binary semaphores are also known as mutex locks as locks that provide mutual
exclusion, they can be used to deal with critical section problem for multiple
processes.
- Counting semaphores can be used to control access to given resource consisting
of finite no. of instances. Each process that wishes to use a resource performs
wait() operation on semaphore, when a process releases a resource it performs a
signal() operation, when count for semaphore goes to 0 all resources are being
used after that processes that wish to use a resource will block until count
becomes greater than 0.
- Five philosophers sit at a circular table. Initially all philosophers are in the thinking phase, & while thinking they do not interact with each other. When a philosopher feels hungry, he attempts to pick up 2 chopsticks; if the philosophers on his left & right are not eating, he gets both chopsticks & starts eating. After he finishes eating, the chopsticks are placed back on the table & the philosopher begins to think again. If the philosopher on his left or right is already eating, he fails to grab both chopsticks at the same time & has to wait.
- Solution is to represent each chopstick as a semaphore & philosophers must
grab & release chopsticks by executing wait operation or signal operation on
appropriate semaphore. We use array ‘chopstick’ of size 5 where each element is
initialized to 1.
- Possible remedies:
o Allow at most 4 philosophers to sit simultaneously at table
o Allow a philosopher to pick chopstick only if both chopsticks are available
o Use an asymmetric solution, i.e. an odd-numbered philosopher first picks up his left chopstick & then his right, while an even-numbered philosopher picks up his right chopstick first & then his left.
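The asymmetric remedy above can be sketched as a runnable example using Python's `threading.Semaphore` for each chopstick (the round count and structure are illustrative, not from the source):

```python
import threading

N = 5
chopstick = [threading.Semaphore(1) for _ in range(N)]   # one semaphore per chopstick
meals = [0] * N

def philosopher(i, rounds=3):
    left, right = i, (i + 1) % N
    # Asymmetric order breaks the circular wait: odd philosophers take the
    # left chopstick first, even philosophers take the right chopstick first.
    first, second = (left, right) if i % 2 == 1 else (right, left)
    for _ in range(rounds):
        chopstick[first].acquire()    # wait() on the first chopstick
        chopstick[second].acquire()   # wait() on the second chopstick
        meals[i] += 1                 # eat
        chopstick[second].release()   # signal()
        chopstick[first].release()    # signal()

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)   # every philosopher finishes all rounds; no deadlock
```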

12. a) Explain RAID levels in detail [DEC 22/MAY 23/DEC 23]
b) What is redundant array storage? Explain RAID levels [DEC 24]
- RAID – Redundant Array of Independent Disks, it is set of physical disk drives
viewed by OS as single logical drive, data are distributed across physical drives
of an array, redundant disk capacity is used to store parity info which provides
recoverability from disk failure.
- RAID 0 (Striping) – improves system performance by splitting data into smaller "blocks" & spreading them across multiple disks; this process is called striping. It enhances data access speed by enabling parallel R/W operations but provides no redundancy or fault tolerance.

- RAID 1 (Mirroring) – enhances reliability by creating an identical copy (mirror) of each data block on separate disks. Ensures that even if one disk fails, the data remains accessible from its duplicate. Reliable, but it requires significant storage overhead.

- RAID 2 (Bit-level striping with Hamming code) – a specialized RAID level that uses bit-level striping combined with error correction using Hamming code. Data is distributed at the bit level across multiple drives & dedicated parity drives are used for error detection & correction.

- RAID 3 (Byte-level striping with dedicated parity) – enhances fault tolerance by employing byte-level striping across multiple drives & storing parity info on a dedicated parity drive. The dedicated parity drive allows reconstruction of lost data if a single drive fails.

- RAID 4 (Block-level striping with dedicated parity) – introduces block-level striping across multiple disks combined with a dedicated parity disk to provide fault tolerance. Data is written in blocks & a separate disk stores parity info calculated using the XOR function.

- RAID 5 (Block-level distributed parity) – builds on RAID 4 by distributing parity info across all disks instead of storing it on a dedicated parity drive.

- RAID 6 (Block-level striping with dual distributed parity) – an advanced version of RAID 5 that provides enhanced fault tolerance by introducing double distributed parity. This allows RAID 6 to recover from the failure of up to two disks simultaneously, making it more reliable for systems with larger arrays.
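The XOR parity that RAID 4/5/6 rely on can be demonstrated in a few lines (a toy byte-level sketch, not a real RAID implementation): the parity block is the byte-wise XOR of the data blocks, so any single lost block can be rebuilt from the survivors.

```python
# Toy XOR parity, as used per stripe by RAID 4/5: the parity block is the
# byte-wise XOR of the data blocks, so any single missing block is recoverable.
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"     # three data blocks of one stripe
parity = xor_blocks(xor_blocks(d0, d1), d2)

# Simulate losing d1: rebuild it from the surviving blocks plus parity,
# since d0 ^ d2 ^ (d0 ^ d1 ^ d2) == d1.
recovered = xor_blocks(xor_blocks(d0, d2), parity)
print(recovered)
```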

13. What is open-source operating systems? What are the design issues of
Mobile operating system and Real time operating system? [DEC 22/MAY 24]
- An open source OS is a type of OS whose source code is made publicly available
for anyone to view, modify & distribute. These are often developed collaboratively
by a community of programmers & are usually free to use. (Ex: Linux, BSD, QNX)
- Linux is the most famous open-source OS & it powers a large portion of servers, smartphones & desktops. Popular distributions are Ubuntu, Debian, Fedora, Arch Linux. Key features – customizability, wide hardware support, community-driven development, & a large software repository.
- BSD (Berkeley Software Distribution) – derived from the original Unix. Popular variants are FreeBSD, OpenBSD, NetBSD. Key features – robust networking support, secure by design, & strong Unix heritage.
- QNX – commercial real-time OS (RTOS) but its core is available as open-source.
Used in embedded systems primarily. Key features – Real time performance,
modularity, & reliability in embedded applications.
- Design issues of Mobile OS:
o Resource constraints – limited CPU, memory & battery, need for power-
efficient resource management.
o User Interface Responsiveness – ensure smooth touch, gesture & display
performance
o Security & Privacy – app sandboxing, permission models.
o Application lifecycle management – managing app states (background,
foreground, suspended, terminated), handling resource allocation for
multitasking
o Network Variability – mobile N/W are less reliable & vary in speed
o Fragmentation, App Store Ecosystem Integration, Power management,
Updates & patching
- Design issues of RTOS:
o Deterministic Response (timing predictability) – tasks must be executed
within strict time constraints, missing deadline can cause system failure
o Resource constraint – runs on microcontrollers with very limited memory &
processing power.
o Scheduling – priority based preemptive
o Interrupt handling – fast & predictable interrupt service routines.
o Minimal latency – very low interrupt latency & context switch time

o Safety & reliability – critical for medical devices, automotive, industrial
control
o Concurrency & synchronization – multiple tasks/processes need reliable
IPC
o Static memory allocation – avoids dynamic memory allocation to prevent
fragmentation & non-determinism
o Minimal power consumption

14. What is the content of page table? Explain. [MAY 23]


- Page table is a data structure used by OS to keep track of mapping between
virtual addresses used by process & corresponding physical address in system’s
memory.
- A Page Table Entry (PTE) is an entry in Page Table that stores info about
particular page of memory. Each PTE contains info such as physical address of
page in memory, whether page is present in memory or not, whether it is writable
or not, & other access permissions.
- Information stored in page table:
o Frame number – the frame number in which the page being looked up resides. The number of bits required depends on the number of frames; the frame field is also known as the address translation bits.
  Number of frames = Size of physical memory / Frame size
  Number of bits for frame number = log2(Number of frames)
o Present/Absent Bit – says whether a particular page is present or absent. If
not present then page fault. (Also known as valid/invalid bit)
o Protection bit – says what kind of protection you want on that page i.e. read,
write, etc.
o Referenced bit – says whether page has been referred to in last clock cycle
or not. When page is accessed it is set to 1.
o Modified bit – says whether page has been modified or not. (Sometimes
also called dirty bit)
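The fields above are typically packed into one machine word per entry. A hypothetical 32-bit PTE layout (the bit positions are illustrative, not from any specific architecture) can be sketched as:

```python
# Hypothetical 32-bit page-table entry: frame number in the high bits,
# status bits (present, writable, referenced, dirty) in the low 4 bits.
PRESENT, WRITABLE, REFERENCED, DIRTY = 1 << 0, 1 << 1, 1 << 2, 1 << 3

def make_pte(frame, present=True, writable=False, referenced=False, dirty=False):
    flags = (PRESENT * present) | (WRITABLE * writable) \
          | (REFERENCED * referenced) | (DIRTY * dirty)
    return (frame << 4) | flags          # frame number sits above the 4 flag bits

def frame_of(pte):
    return pte >> 4                      # strip the flag bits

pte = make_pte(frame=9, present=True, writable=True)
print(frame_of(pte), bool(pte & PRESENT), bool(pte & DIRTY))
```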
15. Compare process scheduling and process switching. [MAY 23/DEC 23]

Process Scheduling | Process Switching
The decision-making activity of selecting which process runs next on the CPU. | The actual act of saving the state of the currently running process & restoring the state of another.
Happens when the OS needs to select a new process to run – either because the current one has finished, is waiting, or a higher-priority process arrived. | Happens after scheduling, during a context switch, to replace the current process on the CPU with another.
Involves evaluating processes based on scheduling algorithms (like round robin). | Involves saving the CPU register values, program counter & stack pointers of the current process & loading those of the new process.
Key component is the scheduler (part of OS that makes scheduling decisions). | Key component is the dispatcher (part of OS that performs the context switch after scheduling).
Affects overall system responsiveness & efficiency by choosing optimal processes to run. | Context switching introduces overhead because it consumes CPU cycles without doing useful work.
Objective is to ensure fair, efficient & policy-driven CPU time distribution. | Objective is to implement the decision made by the scheduler by actually switching between processes.
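The division of labour between the two can be sketched in code (a toy model with hypothetical PCB fields): the scheduler only *picks* the next process, while the dispatcher *performs* the switch by saving and restoring per-process state.

```python
from collections import deque

# Toy model: each PCB holds the saved program counter for its process.
ready_queue = deque([{"pid": 1, "pc": 100}, {"pid": 2, "pc": 200}])
cpu = {"running": None, "pc": 0}

def schedule():
    """Scheduling: the decision -- pick the next process (round robin here)."""
    return ready_queue.popleft()

def context_switch(next_pcb):
    """Switching: the mechanism -- save the old state, load the new state."""
    if cpu["running"] is not None:
        cpu["running"]["pc"] = cpu["pc"]       # save current process state
        ready_queue.append(cpu["running"])     # put it back on the ready queue
    cpu["running"], cpu["pc"] = next_pcb, next_pcb["pc"]   # restore new state

context_switch(schedule())      # dispatch P1
cpu["pc"] += 5                  # P1 executes a few instructions
context_switch(schedule())      # P1 is preempted, P2 is dispatched
print(cpu["running"]["pid"], ready_queue[0]["pc"])
```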

16. Explain UNIX OS kernel. [MAY 23]


- UNIX kernel is the core part of the UNIX OS; it acts as an intermediary between hardware & software applications, providing essential services like process management, memory management, file systems & device control.
- UNIX kernel uses monolithic architecture, all kernel services run in kernel space,
direct communication & function calls between subsystems, faster than
microkernel designs but less modular.
- Components of UNIX kernel – process management, memory management, file
system management, device management, inter-process communication, system
call interface
- Process management – creation, scheduling & termination of process. Manages
process priorities & states. Provides system calls like fork(), exec(), wait(), kill()
- Memory management – allocates & deallocates memory for processes. Supports
virtual memory, swapping & paging. Maintains data structures like page tables &
memory maps
- File system management – provides hierarchical file structure. Manages file
creation, deletion, reading, writing. Controls access permissions for files &
directories.
- Device management – uses device drivers to manage hardware like disks,
terminals & printers. Provides a uniform interface to access different hardware
devices
- Inter-Process Communication (IPC) – allows processes to communicate &
synchronize. Provides mechanisms like pipes, message queues, shared memory
& semaphores.
- System call interface – acts as controlled gate for user applications to request
kernel services (ex: open(), read(), write(), close(),etc.)
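The process-management calls named above (fork(), wait()) can be exercised from Python on a Unix system. A minimal sketch (os.fork is Unix-only; the exit code 7 is an arbitrary illustrative value):

```python
import os

# fork() creates a child process; the parent uses waitpid() to collect
# the child's exit status, mirroring the fork()/wait() pair in the notes.
pid = os.fork()
if pid == 0:
    # Child: in a real program this is where exec() would replace the image.
    os._exit(7)                      # exit with a recognisable status code
else:
    _, status = os.waitpid(pid, 0)   # parent blocks until the child exits
    exit_code = os.WEXITSTATUS(status)
    print("child", pid, "exited with", exit_code)
```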

17. Explain Direct Memory Access (DMA) in detail. [MAY 23]


- DMA (Direct Memory Access) is a technique that allows certain hardware
subsystems (like disk drives or network cards) to transfer data directly to or from
memory without involving the CPU for each byte or word.
- Why DMA is needed:
o In normal I/O operations:
 ▪ The CPU controls data transfer between I/O devices and memory.
 ▪ This keeps the CPU busy during the entire transfer, reducing efficiency.
o With DMA:
 ▪ The DMA controller (DMAC) handles the transfer.
 ▪ The CPU is free to perform other tasks.
- Working:
i. CPU initiates the DMA transfer: it provides the DMA controller with the source address, destination address, number of bytes to transfer, & the direction (read/write).
ii. DMA controller takes control of the system bus: it requests access to memory (bus arbitration).
iii. Data is transferred directly between the device and main memory.
iv. DMA controller interrupts the CPU: once the transfer is complete, it sends an interrupt to notify the CPU.
- Modes of DMA
o Burst mode – Transfers a block of data all at once; CPU waits until
complete.
o Cycle stealing – DMA takes control of the bus for one cycle at a time,
interleaving with CPU.
o Transparent mode - DMA only transfers data when the CPU is idle. Slowest,
but doesn’t interfere.
- Advantages:
o Reduces CPU load
o Enables faster data transfer
o Improves overall system performance
o Essential for high-speed I/O devices (like disk, graphics)

18. Explain with suitable example, how virtual address is converted to physical address? [MAY 23]
- In a virtual memory system, each process uses virtual addresses that must be
translated into physical addresses in the main memory using the Memory
Management Unit (MMU).
- Key terms:
o Virtual Address (VA): Address used by a program.
o Physical Address (PA): Actual address in RAM.
o Page: Fixed-size block of virtual memory.
o Frame: Fixed-size block of physical memory.
o Page Table: Data structure used to map pages to frames.
- General formula:
o Virtual Address (VA) = <Page Number (p), Page Offset (d)>
o Physical Address (PA) = <Frame Number (f), Offset (d)>
o Physical Address = (Frame Number × Frame Size) + Offset
- Example:
o Virtual address size = 16 bits ⇒ Virtual address space = 64 KB
o Page size = 4 KB ⇒ Offset size = 12 bits (since 2¹² = 4096 = 4 KB)
o Therefore, Page Number = 4 bits (16 - 12 = 4)
o Physical memory = 32 KB ⇒ 8 frames of 4 KB each
o Suppose: virtual address = 0x1A3F, in binary = 0001 1010 0011 1111
o Break into: page no. = 0001 (page 1) and offset = 1010 0011 1111 = 0xA3F
o Page table:
    Page number | Frame number
    0           | 5
    1           | 3
    2           | 6
    3           | 0
o From the table: Page 1 → Frame 3
o Calculate physical address:
Frame Number = 3
Frame size = 4 KB = 4096
Offset = 0xA3F = 2623 (decimal)
So, physical address = (3 × 4096) + 2623 = 12288 + 2623
= 14,911 (decimal) = 0x3A3F (hex)
o Final answer: virtual address = 0x1A3F and physical address = 0x3A3F
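The translation worked through above can be sketched in a few lines of code. This is only a simplified illustration of the calculation, not how a real MMU is implemented; the page-table values are taken from the example.

```python
# Illustrative sketch of the worked example above (not a real MMU).
PAGE_SIZE = 4096                        # 4 KB page => 12-bit offset
page_table = {0: 5, 1: 3, 2: 6, 3: 0}   # page number -> frame number

def translate(virtual_addr):
    page = virtual_addr // PAGE_SIZE    # high bits: page number
    offset = virtual_addr % PAGE_SIZE   # low 12 bits: offset
    frame = page_table[page]            # page-table lookup
    return frame * PAGE_SIZE + offset   # PA = (frame × frame size) + offset

print(hex(translate(0x1A3F)))  # 0x3a3f, matching the example
```

Note how the integer division and modulo mirror splitting the binary address into page number and offset.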
19. a) What is virtual memory technique? Discuss segmentation with
example. [MAY 23]
b) Write short notes on: Segmentation [DEC 24]
b) Write short notes on: Segmentation [DEC 24]
- Refer Q5 for virtual memory
- Segmentation is a memory management technique where a
process is divided into variable-sized segments based on
logical divisions such as code, data, and stack.
- Each segment has a segment number and an offset.
- The operating system maintains a segment table that stores
the base address and limit (length) of each segment.
- Segmentation is visible to the programmer and matches the
program's logical structure.
- Key concept:
o A segment is a logical unit of the program.
o A logical address in segmentation has two parts:
Logical Address = <Segment Number, Offset>
o A Segment Table maps each segment to:
 Its base address (starting physical address)
 Its limit (length of the segment)
- Address Translation in Segmentation:
o To translate <Segment Number, Offset> into a physical address:
a. Look up the segment number in the segment table.
b. Get the base address and limit.
c. Check if offset < limit:
 If yes: Physical Address = Base + Offset
 If no: trap to the OS (segmentation fault)
- Example:
Assume:
o Segment 2 has base = 1000, limit = 400
o Logical address = <2, 50>
o Offset 50 < limit 400, Physical Address = 1000 + 50 = 1050
- Advantages:
o Reflects logical program structure
o Enables protection and sharing of specific segments
o Allows growing segments independently (e.g., stack, heap)
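The translation steps above can be sketched as follows, using the segment-table values from the example (the code is a simplified illustration, not a real address-translation unit):

```python
# Sketch of segment-table lookup with a limit check (illustrative only).
segment_table = {2: (1000, 400)}  # segment number -> (base, limit)

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:                       # offset outside the segment
        raise MemoryError("segmentation fault")
    return base + offset                      # PA = base + offset

print(translate(2, 50))  # 1050, matching the example
```

An out-of-range offset (e.g. 500 against limit 400) raises the fault instead of returning an address.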
20. State features of Cloud OS. Enlist its advantages and disadvantages.
[MAY 23]
- A cloud OS is an OS designed to manage & run applications, services &
resources within a cloud computing environment. A cloud OS is designed to
provide seamless access to cloud resources, allowing users to interact with
virtualized hardware, storage, networking & computing resources over the
internet. [Ex: Google Cloud Platform (GCP), Amazon Web Services (AWS), Cloud
Foundry, OpenStack, IBM Cloud]
- Features:
o Virtualization Support – Manages virtual machines (VMs) & abstracts
hardware resources.
o Scalability – Dynamically allocates resources (CPU, memory, storage)
based on demand.
o Multi-Tenancy – Supports multiple users securely on a shared
infrastructure.
o Resource Management – Efficiently manages computing resources,
ensuring optimal performance.
o Web-Based Interface – Accessible through a web browser; no need for
installation on client side.
o Security & Access Control – Includes identity management, role-based
access, & data encryption.
o API integration – Offers APIs for developers to build & integrate cloud-
based applications.
o Monitoring & Analytics – Tracks performance metrics, logs, & system health
in real-time.
o Storage Services - Provides access to distributed file systems or object
storage.
o Automatic Updates – Software & system updates are handled centrally &
automatically.
- Advantages:
o Cost Efficiency – Reduces hardware & maintenance costs; pay-as-you-go
pricing models
o Remote Accessibility – Accessible from anywhere with internet connectivity.
o High Availability – Built-in redundancy & failover mechanisms ensure
uptime.
o Fast Deployment – Rapid provisioning of servers, applications & services.
o Centralized Management – Simplifies administrative tasks across multiple
machines & users.
o Environmentally Friendly – Optimized use of shared infrastructure reduces
energy usage.
o Improved collaboration – Multiple users can access & work on shared
resources in real-time.
- Disadvantages:
o Dependence on Internet – Requires stable & fast internet access for optimal
performance.
o Data Security & Privacy Risks – Potential exposure of sensitive data if
security is compromised.
o Limited Control – Less control over infrastructure compared to on-premise
systems.
o Downtime & Outages – Cloud provider issues can lead to service
interruptions.
o Compatibility issues – Some legacy applications may not be supported.
o Vendor Lock-In – Switching providers can be difficult & costly due to
proprietary technologies.
o Ongoing Costs – Long-term operational costs may accumulate, especially
with increasing usage.
21. a) What is demand paging? Discuss the hardware support required to
support demand paging. [MAY 23]
b) What is demand paging? What are its advantages? [MAY 24]
- It is a technique used in virtual memory systems where pages enter main
memory only when requested or needed by CPU. The OS loads only necessary
pages of a program into memory at runtime instead of loading the entire program
into memory at the start.
- Pure demand paging is a specific implementation of demand paging. OS only
loads pages into memory when program needs them.
- Working process of Demand Paging:
1. Program Execution – Upon launching program, OS allocates certain amount
of memory to program & establishes a process for it.
2. Creating Page Tables – To keep track of which program pages are currently in
memory & which are on disk, OS makes page tables for each process.
3. Handling Page Fault – When a program tries to access a page that isn’t in
memory at the moment, a page fault happens. The OS pauses the application &
consults the page tables to determine where the necessary page is on disk.
4. Page fetch – OS loads necessary page into memory by retrieving it from disk
if it is there. New location of page is reflected in page table.
5. Resuming the program – OS picks up where it left off when necessary pages
are loaded into memory.
6. Page Replacement – If there is not enough free memory to hold all pages a
program needs, OS may need to replace one or more pages currently in
memory with the required pages from disk. The page replacement algorithm
used by the OS determines which pages are selected for replacement.
7. Page Cleanup – When a process terminates, OS frees memory allocated to
process & cleans up corresponding entries in page tables.
- Demand paging can improve system performance by reducing memory needed
for programs & allowing multiple programs to run simultaneously. If not
implemented properly, it can cause performance issues i.e. when a program
needs a part that isn’t in main memory, OS must fetch it from hard disk which
takes time & pauses program, this can cause delays & if system runs out of
memory it will need frequently swap pages in & out, increasing delays & reducing
performance.
- Advantages:
o Efficient use of physical memory – Demand paging allows for more efficient
use because only necessary pages are loaded into memory at any given
time.
o Support for larger programs – Programs can be larger than physical
memory available on system because only necessary pages will be loaded
into memory.
o Faster program start – As only part of program is initially loaded into
memory, programs can start faster than if entire program were loaded at
once.
o Reduce memory usage – Demand paging can help reduce amount of memory
a program needs which can improve system performance by reducing
amount of disk I/O required.
- Disadvantages:
o Page Fault Overload – process of swapping pages between memory & disk
can cause performance overhead, especially if program frequently
accesses pages that are not currently in memory.
o Degraded Performance – If a program frequently accesses pages that are
not currently in memory, system spends a lot of time swapping out pages,
which degrades performance.
o Fragmentation – Demand paging can cause physical memory fragmentation,
degrading system performance over time.
o Complexity – Implementing demand paging in an OS can be complex,
requiring complex algorithms & data structures to manage page tables &
swap space.
- Hardware support required:
o Memory Management Unit (MMU) – translates virtual addresses to physical
addresses using page table, checks valid (proceed with address
translation)/invalid bit (page fault occurs)
o Page Fault Trap Mechanism – hardware must be capable of generating a
trap (interrupt) when page fault occurs. This trap transfers control to OS
which then locates page on disk, loads it into free frame in memory,
updates page table & restarts instruction that caused fault.
- Secondary Storage Interface, Buffer, etc.
22. Write short note on: Disk Scheduling [MAY 23]
- SSTF, SCAN, FCFS (with graphs)
- Refer assignment 2 (it’s numerical convert that to theory in your own words)

23. Write short note on: Real Time Operating System [MAY 23/DEC 23/MAY
24]
- RTOS is an OS specifically designed to meet requirements of real-time
applications, where correctness of an operation depends not only on its logical
correctness but also on time it was executed. An RTOS ensures that critical tasks
are completed within defined time constraints known as deadlines.
- Types of RTOS: Hard RTOS, Soft RTOS, Firm RTOS
- Hard Real-Time Systems: missing a deadline is unacceptable & can lead to
catastrophic failures. (Ex: automotive airbag systems, medical infusion pumps)
- Soft Real-Time Systems: meeting deadlines is important but not critical. Missing a
deadline may result in degraded performance or reduced quality of service but
system will continue to function (Ex: video streaming applications, real-time
gaming)
- Firm Real-Time Systems: similar to soft RTOS, but missing a deadline often
results in a significant degradation in system’s performance or quality. It is not
catastrophic but system should try to meet deadlines as much as possible. (Ex:
Online transaction processing systems, stock trading systems)
- Uses of RTOS – defense systems like RADAR, air traffic control system, medical
devices like pacemakers, stock trading applications.
- Advantages – maximum resource utilization, fast task shifting, focus on
application, error-free operation, efficient memory allocation
- Disadvantages – limited tasks, heavy use of system resources, complex algorithms,
thread-priority issues, minimal task switching
24. Write short note on: Deadlock avoidance [MAY 23/DEC 24]
- write about deadlock & conditions….
- A decision is made dynamically whether the current resource allocation request
will (if granted) potentially lead to a deadlock. This requires knowledge of future
process requests. There are 2 approaches to avoid deadlock.
- 1 – do not start a process if its demands might lead to deadlock & 2 – do not
grant an incremental resource request to a process if this allocation might lead to
deadlock
- Resource allocation denial refers to the banker’s algorithm. The state of the
system is the current allocation of resources to processes; a safe state is one
from which there is at least one sequence of completions that does not result in
deadlock, whereas an unsafe state is a state that is not safe.
- Write about banker’s algorithm, safe sequence, allocation matrix, need matrix,
claim matrix, available resource, etc.
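As a sketch of the banker's algorithm safety check described above — the matrices below are illustrative textbook-style values, not taken from any specific question:

```python
def is_safe(available, need, allocation):
    """Return a safe sequence of process indices, or None if unsafe."""
    work = list(available)
    n = len(need)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):
            # A process can finish if its remaining need fits in work.
            if not finished[i] and all(need[i][j] <= work[j]
                                       for j in range(len(work))):
                # When it finishes, it releases everything it holds.
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finished[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return None  # no runnable process: unsafe state
    return sequence

# Classic 3-resource example (illustrative numbers):
available = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, need, allocation))
```

If the function returns a sequence, granting the pending requests keeps the system in a safe state; `None` means the request should be denied.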
25. a) Write short note on: Process Control Block [MAY 23]
b) What is a process? Explain Process Control Block in detail. [DEC 23/DEC
24]
- Process is a program in execution, it is an instance of program running on a
computer, it is an entity that can be assigned to & executed on a processor, it is a
unit of activity characterized by execution of a sequence of instructions (a current
state & an associated set of system instructions)
- Process consists of – program code (an executable program), set of data
(associated data needed by program), no. of attributes describing state of
process & execution context of program (all info OS needs to manage process)
- PCB contains process elements, it is created & managed by OS, it allows support
for multiple processes
- Components of PCB:
o Identifier – unique ID to distinguish from other process
o State – if process is executing currently then it is in running state
o Priority – priority level relative to other processes
o PC (Program Counter) – address of next instruction in program to be
executed
o Memory pointers – includes pointers to program code & data associated &
also any memory blocks shared with other processes
o Context data – these are data that are present in register in processor while
process is executing
o I/O status info – includes outstanding I/O requests, I/O devices assigned to
this process, a list of files in use by process & so on….
o Accounting info – may include amount of processor time & clock time used,
time limits,…..
- Role of PCB:
o It is the most important data structure in an OS (it defines the state of the OS)
o It requires protection: 1 – a faulty routine could damage a PCB, destroying the
OS’s ability to manage the process & 2 – any design change to the PCB can
affect many modules of the OS
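The PCB fields listed above can be pictured as a simple record. The dataclass below is purely an illustration — real kernels define the PCB as a C structure (e.g. `task_struct` in Linux), not like this:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:                        # illustrative only, not a real kernel struct
    pid: int                      # identifier
    state: str = "new"            # new / ready / running / blocked / exit
    priority: int = 0
    program_counter: int = 0      # address of next instruction
    registers: dict = field(default_factory=dict)   # context data
    open_files: list = field(default_factory=list)  # I/O status info
    cpu_time_used: float = 0.0    # accounting info

p = PCB(pid=42, priority=5)
p.state = "ready"                 # OS updates the state on a transition
print(p.pid, p.state)
```

On a context switch the OS saves the CPU registers into the outgoing process's PCB and restores them from the incoming one.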
26. Explain process state model. [DEC 23/DEC 24]
- Two-state Process Model: a process is either Running or Not Running.
- Five-state Process Model: New, Ready, Running, Blocked (Waiting) & Exit.
- One Suspend State Process Model: adds a Suspend state for processes swapped
out to disk.
- Two Suspend State Process Model: splits suspension into Ready/Suspend &
Blocked/Suspend states.
- Draw the state-transition diagram for each model.
27. Explain about IPC. [DEC 23/DEC 24]
- In order to cooperate concurrently executing processes must communicate &
synchronize, IPC is based on use of shared variables (variables referenced by
more than one process) or message passing.
- Synchronization is necessary when processes communicate. To communicate,
one process must perform some action, such as setting the value of a variable or
sending a message, that the other detects. This works only if the two events
(perform an action & detect an action) are constrained to happen in that order.
- Synchronization is set of constraints on ordering of events (To satisfy such
constraints, execution of processes are delayed)
- Role of IPC – prevent race condition, ensuring mutual exclusion, coordinating
process execution, preventing deadlocks, communication between processes &
fairness.
28. What are different types of process scheduling algorithms? Explain
anyone scheduling algorithm with example. [DEC 23]
- Scheduling is the activity of process manager that handles removal of running
process from CPU & selection of another process on basis of a particular
strategy.
- FCFS (First Come First Serve):
o It is non pre-emptive
o Pro – simple to implement & understand
o Con – poor average waiting time (not very good for time-sharing systems)
o Convoy effect – short process behind long process, one CPU-bound
process with numerous I/O-bound processes, resource utilization of I/O
device degrades.
- Shortest Job First (SJF):
o Length of next CPU burst is associated with each process. Schedules the
process with shortest burst. Use FCFS sub-scheduling to break ties.
o Two Schemes – Non pre-emptive & Pre-emptive
o Non pre-emptive – cannot pre-empt before process completes its CPU
burst
o Pre-emptive – if a new process arrives with CPU burst length less than
remaining time of current executing process, then pre-empt (known as
shortest remaining time first – SRTF)
o SJF is optimal, minimizes average waiting time for any given set of
processes.
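The SJF idea can be illustrated with a short sketch: non-pre-emptive, with all processes assumed to arrive at time 0 (the burst values below are illustrative, taken from a common textbook-style example):

```python
def sjf_waiting_times(bursts):
    """Non-pre-emptive SJF with all arrivals at time 0 (simplifying
    assumption): the shortest remaining job always runs next."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waiting = [0] * len(bursts)
    time = 0
    for i in order:
        waiting[i] = time      # process i waits until the CPU frees up
        time += bursts[i]
    return waiting

w = sjf_waiting_times([6, 8, 7, 3])
print(w, sum(w) / len(w))  # [3, 16, 9, 0] 7.0
```

Running the same bursts in FCFS order (0 + 6 + 14 + 21 = 41, average 10.25) shows why SJF minimizes the average waiting time.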
- Priority scheduling:
o A priority number (integer) is associated with each process within specified
range (0 to 255)
o CPU is allocated to process with highest priority (smallest integer = highest
priority) [can be pre-emptive or non pre-emptive]
o SJF is like priority scheduling, priority is predicted next CPU burst time.
o Problem – starvation (low priority process may never execute)
o Solution – aging (as time progresses increase priority of process)
- Round Robin (RR):
o Each process gets a small unit of CPU time (time quantum). After this time
has elapsed, process is pre-empted & added to end of ready queue.
o If ‘n’ processes are in the ready queue & the time quantum is ‘q’, then each
process gets 1/n of the CPU time in chunks of at most ‘q’ time units at once.
No process waits more than (𝑛 − 1)𝑞 time units.
o Time quantum 𝑞 must be large with respect to context switch, otherwise
overhead is too high
o Performance: when q large → FIFO, when q small → processor sharing
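A minimal Round Robin simulation, assuming all processes arrive at time 0 and ignoring context-switch overhead (a sketch, not a full scheduler; the burst values are illustrative):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return completion times; bursts[i] is the CPU burst of process i
    (all assumed to arrive at time 0 -- a simplifying assumption)."""
    remaining = list(bursts)
    ready = deque(range(len(bursts)))
    time, completion = 0, [0] * len(bursts)
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])   # run at most one quantum
        time += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)                # pre-empted: back of the queue
        else:
            completion[i] = time
    return completion

print(round_robin([5, 3, 1], quantum=2))  # [9, 8, 5]
```

With a very large quantum the loop degenerates to FCFS, matching the performance note above.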
29. a) Give detail comparison of user level and kernel level threads. [DEC 23]
b) What is thread in OS? Compare user level and kernel level threads. [DEC
24]
- Refer to thread answer….
| User level threads | Kernel level threads |
| Implemented by user-level libraries | Implemented by OS |
| OS doesn’t recognize user-level threads directly | Recognized by OS |
| Implementation is easy | Implementation is complicated |
| Context switch time is less | Context switch time is more |
| No hardware support is required | Hardware support is required |
| If one user-level thread performs a blocking operation then the entire process will be blocked | If one kernel thread performs a blocking operation then another thread can continue execution |
| Multithreaded applications cannot take full advantage of multiprocessing | Can be multithreaded |
| Can be created & managed more quickly | Takes more time to create & manage |
| Any OS supports user-level threads | Kernel-level threads are OS specific |
| Limited access to system resources, cannot directly perform I/O operations | Can access system-level features |
| More portable than kernel-level threads | Less portable due to dependence on OS |
30. What is an Operating System? Explain structure of Operating System.
[DEC 23/DEC 24]
- OS is a system software, it acts as an interface between user & hardware, it
manages operates & communicates with computer hardware & software, it
controls execution of application programs, it is a resource manager, its main job
is to provide resources & services to user program.
- OS objectives – convenience, efficiency, performance, ability to evolve
- OS structure – monolithic, microkernel, layered, virtual machines
- Monolithic kernel: Single large process running entirely in single address space,
all services exist & execute in kernel space, entire OS is placed inside kernel,
easy to implement/code, kernel can invoke functions directly hence performance
is high, less secure because if one service fails entire system crashes.
- Microkernel: every component has its own space, interactions between
components strictly through well defined interfaces, kernel has basic interprocess
communication & scheduling, only bare minimum code is placed inside kernel,
services are separated so they have different address spaces, kernel is broken
down into processes called as servers, tough to implement/code, performance is
low, more secure because if one service crashes others still function properly.

| Monolithic | Microkernel |
| User services & kernel services are kept in the same address space | User services & kernel services are kept in separate address spaces |
| Larger than microkernel | Smaller in size |
| Fast execution | Slow execution |
| Hard to extend | Easily extendible |
| If a service crashes, the whole system crashes | If a service crashes, other services function independently; only that service is affected |
| Less code required to write | More code is required to write |
| Ex: Linux, BSDs | Ex: QNX, Symbian |
31. Explain objectives and characteristics of modern operating system.
Explain Network OS. [DEC 23/DEC 24]
- Refer OS objectives & features
- Network OS is an OS designed to manage N/W resources & facilitate
communication between different devices in a computer N/W.
- N/W OS is responsible for managing interactions between devices in a N/W,
ensuring resources like data, printers & file systems are shared & accessible
across N/W.
- Types of N/W OS: Peer-to-Peer N/W OS, Client-Server N/W OS, Hybrid N/W OS
- Peer-to-Peer N/W OS – each device acts as both a client & a server & resources
are shared equally among all devices. There is no centralized control over the
N/W & each device has ability to access or provide N/W resources. (Ex: Microsoft
Windows Home Edition)
- Client-Server N/W OS – a centralized server provides resources & services such
as file sharing, authentication & applications while client devices access these
resources. This model is commonly used in enterprise environments. (Ex:
Windows Server, UNIX-based servers)
- Hybrid N/W OS – they combine aspects of both peer-to-peer & client-server
models. Some resources might be shared directly between peers while more
critical resources are provided by a centralized server. (Ex: Windows Server-
based networks that also allow for peer-to-peer sharing in small, ad-hoc
configurations.)
- Examples of N/W OS – Windows Server, Linux/Unix, Novell NetWare, macOS
Server, Cisco Internetwork OS (IOS), Solaris
- Applications of N/W OS – corporate N/Ws, Internet Service Providers (ISPs),
Educational Institutions, Small Office/Home Office (SOHO)
32. a) List page replacement algorithms? Explain anyone page replacement
algorithms with example. [DEC 23]
b) What is page replacement? Explain anyone page replacement algorithm
with example. [DEC 24]
- Page replacement algorithms are used in virtual memory systems to decide
which memory page to remove when a new page needs to be loaded and
memory is full.
- Common Page Replacement Algorithms: (These are just overview write
something in your own words or include diagrams/example)
i. FIFO (First-In, First-Out)
 Replaces the oldest loaded page.
 Simple to implement using a queue.
 Can lead to Belady’s Anomaly (more frames → more page faults).
ii. LRU (Least Recently Used)
 Replaces the page that was least recently used.
 Based on the past usage pattern.
 Gives good performance, but costly to implement (requires tracking).
iii. Optimal Page Replacement (OPT)
 Replaces the page that will not be used for the longest time in future.
 Gives the lowest possible page fault rate.
 Not implementable in practice (requires future knowledge).
iv. NRU (Not Recently Used)
 Uses reference (R) and modify (M) bits.
 Classifies pages into 4 categories; removes from the lowest class.
 Good balance of performance and overhead.
v. LFU (Least Frequently Used)
 Replaces the page with lowest access count.
 Assumes pages used less often are less important.
 Can be inefficient if old but heavily used pages stay long.
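FIFO replacement (algorithm i above) can be sketched as follows; the reference string is the classic one used to demonstrate Belady's Anomaly:

```python
from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults under FIFO replacement (illustrative sketch)."""
    frames = deque()            # oldest loaded page sits at the left
    faults = 0
    for page in reference_string:
        if page not in frames:  # page fault: page must be loaded
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()        # evict the oldest page
            frames.append(page)
    return faults

# Classic reference string used to show Belady's Anomaly:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3), fifo_page_faults(refs, 4))  # 9 10
```

Note the anomaly: adding a fourth frame raises the fault count from 9 to 10, exactly as described for FIFO above.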
33. Write short note on: Deadlock recovery [DEC 23]
- Deadlock recovery is the process of restoring a system to normal operation after
a deadlock has been detected. It involves breaking the circular wait condition
among processes to free up resources.
- Techniques for Deadlock Recovery:
i. Process Termination
 Abort all deadlocked processes.
 Or abort one process at a time until the deadlock breaks.
 Criteria: priority, resource usage, execution time, etc.
ii. Resource Preemption
 Temporarily take resources away from one process and give them to
others.
 Risk of starvation, so a rollback mechanism may be needed.
iii. Rollback
 Roll back one or more processes to a safe state using saved
checkpoints.
 Re-execute from that point to avoid deadlock.
- Challenges:
o Deciding which process to terminate or which resource to pre-empt.
o Maintaining system consistency and stability.
o Preventing starvation and repeated deadlocks.
34. Write short note on: Android [DEC 23]
- Android OS is widely used mobile OS developed by Google primarily designed
for touchscreen mobile devices like smartphones, tablets, smartwatches & other
wearable devices. It is based on Linux kernel & offers rich application framework
for developers along with an open-source structure that enables vast array of
manufacturers & developers to customize & create apps for platform.
- Key features/Characteristics – Open source, touchscreen interface,
customizability, multi-tasking, app ecosystem (Google Play Store), Google
services integration, hardware compatibility, security, OTA (Over-the-air) updates
- Key components of Android OS:
o Linux Kernel – Android’s foundation is Linux kernel which provides low-level
system functionality such as process management, memory management,
hardware drivers & networking. It ensures that device’s hardware resources
are managed efficiently & securely.
o Libraries – it includes a set of libraries that provide core functionality such
as graphics, database management & web browsing. These libraries are
built on top of Linux kernel & are essential for running Android apps. Key
libraries include WebKit (for web browsing), SQLite (for databases),
OpenGL ES (for 2D & 3D graphics) & SSL (for secure communications)
o Android Runtime (ART) – the environment in which Android apps run. It
executes app by interpreting bytecode into native machine code allowing
them to interact with hardware & system resources.
o Application framework – provides rich application framework offering
developers access to various tools & APIs to interact with device hardware,
manage resources & display user interfaces.
35. Explain Race condition with example. [MAY 24]
- A race condition occurs when:
o Multiple processes or threads read & write data items
o They do so in a way where final result depends on order of execution of
processes
- Output depends on who finishes the race last.
- Race condition is the situation where several processes access & manipulate
shared data concurrently. Final value of shared data depends upon which
process finishes last.
- To prevent race conditions, concurrent processes must be synchronized.
- Processes working together share common storage that each can read or write.
Shared storage may be in main memory or it may be a shared file. When a user
process wants to read from a file, it must tell the file process what it wants; the
file process then informs the disk process to read the required block.
- Situations where two or more processes are reading or writing some shared data
& final result depends on who runs precisely when are called race conditions.
- Include example…
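A hedged sketch in Python: `counter += 1` compiles to separate read, add and write steps, so two unsynchronized threads can overwrite each other's update. Because lost-update counts vary from run to run, only the locked version is shown producing a fixed result; the unsafe version is included for contrast.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    global counter
    for _ in range(n):
        counter += 1              # read-modify-write: NOT atomic

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:                # mutual exclusion removes the race
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(50_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 200000 - deterministic only because of the lock
```

Replacing `safe_increment` with `unsafe_increment` can yield a final value below 200000, which is exactly the "output depends on who finishes last" behaviour described above.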
36. What are features of Mobile and Real Time Operating Systems? [MAY 24]
- Features of Mobile OS:
o Rich GUI – touchscreen support, gestures, animations, multi-windowing
o Multitasking – runs multiple apps simultaneously, background & foreground
app management
o App store ecosystem – centralized platform for downloading, installing, &
updating applications securely
o Network connectivity – seamless switching between Wi-Fi, 4G/5G,
Bluetooth, & NFC
o Power management – Aggressive background task control to optimize
battery life, app standby & sleep modes.
o Security & permissions – app sandboxing, runtime permissions for
accessing locations contacts camera etc.
o Multimedia capabilities – audio, video playback, camera integration, AR
(augmented reality), VR (Virtual Reality) support on modern devices
o Sensor support – accelerometers, gyroscopes, GPS, proximity sensors,
biometric scanners
o Cloud integration – sync with cloud services for contacts media files &
backups
o Frequent updates & patches – system & security updates delivered via
internet
- Features of RTOS (refer RTOS design issues), deterministic response time,
priority based scheduling, minimal latency, resource efficiency, static memory
allocation, task synchronization & IPC, modularity & portability, device driver
support, minimal power consumption, high reliability & safety
37. What is difference between physical address and virtual address? [DEC
24]
| Physical Address | Virtual Address |
| It is a location in a memory unit | It is generated by CPU |
| It is the set of all physical addresses mapped to corresponding logical addresses | It is the set of all logical addresses generated by CPU in reference to a program |
| User can never view physical address of program | User can view logical address of a program |
| Computed by MMU (memory management unit) | Computed by CPU |
| User can indirectly access physical address but not directly | User can use logical address to access physical address |
| Physical address will not change i.e. not editable | Logical address can change |
| It is also called as real address | It is also called as logical address |
38. Write short notes on: Memory Allocation [DEC 24]
- Memory allocation in an Operating System (OS) refers to the process of
assigning physical or virtual memory blocks to various processes and programs
so they can execute efficiently.
- It is a key function of the Memory Management Unit (MMU) within the OS.
- Types of Memory Allocation in OS:
i. Static Allocation
 Memory is assigned to processes before execution begins.
 Done at compile time.
 Fixed memory size → cannot grow during runtime.
 Example: OS allocates fixed stack space for kernel threads.
ii. Dynamic Allocation
 Memory is allocated at runtime as needed.
 Allows efficient use of memory and better multiprogramming.
 Used in heap management via system calls like malloc() or OS-level
allocators.
- Memory Allocation Methods in OS:
i. Contiguous Allocation
 Each process is placed in a single contiguous block of memory.
 Simple and fast, but can lead to external fragmentation.
ii. Paging
 Memory is divided into fixed-size pages (virtual) and frames (physical).
 Eliminates external fragmentation.
 OS maintains a page table to map pages to frames.
iii. Segmentation
 Divides memory based on logical segments like code, data, and stack.
 More flexible than paging, supports protection and sharing.
iv. Pages segmentation / Segmented paging
 Combines paging and segmentation for better flexibility and
management.
39. Write short notes on: Cache Memory [DEC 24]
40. What is paging?
- Paging is a memory management scheme that eliminates external
fragmentation and minimizes internal fragmentation.
- It divides both logical memory (process) and physical memory (RAM)
into fixed-size blocks:
o Pages (in logical memory)
o Frames (in physical memory)
- When a process is loaded, its pages are placed into any available frames,
even if they are not contiguous.
- This removes the need for allocating a large contiguous block, solving
the problem of external fragmentation.
- Internal fragmentation may still occur in the last page of a process if it's
not completely filled, but this is limited and predictable.
- A page table is used to maintain the mapping between the process’s
pages and memory frames.
- Logical address is split into: → Page number (used to index into the page
table) → Offset (used within the frame)
- Example:
o If a process needs 18 KB and page/frame size is 4 KB:
→ It will require 5 pages (4 full pages and 1 partially filled)
o Only the last page will have 2 KB internal fragmentation (if it uses
only 2 KB of the 4 KB frame)
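The 18 KB example can be checked with a couple of lines (page size and process size taken from the example; the helper names are illustrative):

```python
PAGE_SIZE = 4 * 1024              # 4 KB pages/frames

def pages_needed(size):
    return -(-size // PAGE_SIZE)  # ceiling division: last page may be partial

def internal_fragmentation(size):
    return pages_needed(size) * PAGE_SIZE - size  # waste in the last page

size = 18 * 1024                  # the 18 KB process from the example
print(pages_needed(size), internal_fragmentation(size) // 1024)  # 5 2
```

A process whose size is an exact multiple of the page size (e.g. 16 KB) wastes nothing, which is why the fragmentation is described as limited and predictable.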
- Advantages:
o No external fragmentation – Pages fit into any available frame.
o Efficient memory utilization – Makes use of all free frames.
o Simplifies memory allocation – Fixed-size pages and frames are
easier to manage.
o Supports virtual memory – Pages can be stored in secondary
memory until needed.
- Disadvantages:
o Minor internal fragmentation – Wasted space in the last page.
o Overhead of page table – Needs memory to store and manage
mappings.
o Slower address translation – Requires hardware support like TLB
to maintain speed.