OS All Units 4,2 Marks Answers
4 marks
1. Cite the objectives of the operating system?
The operating system acts as a bridge between the user of a computer system and the
computer hardware. Application programs sit on top of the operating system and rely on
it to make use of the computer hardware.
The following are the main objectives of an operating system:
● Efficiency
● Hardware abstraction
● Convenience
● System resource management
An operating system is a program that acts as an interface between the user and the computer
hardware and controls the execution of all kinds of programs.
The operating system performs all the basic tasks like file management, memory management,
process management, handling input and output, and controlling peripheral devices such as disk
drives and printers.
Some popular Operating Systems include Linux Operating System, Windows Operating System,
VMS, OS/400, AIX, z/OS, etc.
2. Discuss the Operating System viewed as a resource allocator & control program?
System View: The OS may also be viewed as just a resource allocator. A computer system
comprises various resources, such as hardware and software, which must be managed effectively.
The operating system manages the resources, decides between competing demands, controls
program execution, and so on. According to this point of view, the operating system's purpose is
to maximize performance: it is responsible for managing hardware resources and allocating them
to programs and users to ensure maximum performance.
From the user point of view, we have discussed the numerous applications that require varying
degrees of user participation. From the system viewpoint, however, we are more concerned with
how the hardware interacts with the operating system than with the user. The hardware and the
operating system interact for a variety of reasons, including:
1. Resource Allocation
The hardware contains several resources such as registers, caches, RAM, ROM,
CPUs, and I/O devices. These are all resources that the operating system hands out when
an application program demands them. Only the operating system can allocate resources,
and it uses several tactics and strategies, including paging, virtual memory, and caching,
to get the most out of the available processing capacity and memory space. Efficient
allocation also matters from the user's viewpoint: inefficient resource allocation can make
the system lag or hang, degrading the user experience.
2. Control Program
The control program controls how input and output devices (hardware) interact
with the operating system. The user may request an action that can only be performed by I/O
devices; in this case, the operating system must be able to properly communicate with, control,
detect, and handle such devices.
3. Discuss about bootstrap program
A bootstrap program is the first code that is executed when a computer system is
started. It is a small, but essential program that is responsible for loading the rest of the
operating system into memory. The bootstrap program is typically stored in a
non-volatile memory such as ROM or flash memory, so that it can be accessed even
when the computer is first turned on.
The bootstrap process is a chain of events, where each step loads and executes the
next program in the sequence. The first step is for the CPU to load the bootstrap
program from ROM into memory. The bootstrap program then performs a number of
tasks, including:
● Initializing the hardware: This includes setting up the memory map, configuring
the CPU registers, and enabling interrupts.
● Loading the operating system kernel: The bootstrap program locates the
operating system kernel on a storage device, such as a hard drive or SSD, and
loads it into memory.
● Transferring control to the kernel: Once the kernel is loaded, the bootstrap
program transfers control to it. The kernel then takes over the boot process and
completes the initialization of the operating system.
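The chain of events above can be pictured with a tiny C sketch. Everything here (init_hardware, load_kernel_image, the stub kernel) is a hypothetical stand-in for firmware-specific code, not a real boot-loader API; an actual bootstrap is hardware-dependent and usually written partly in assembly.

```c
#include <stdio.h>

/* Conceptual bootstrap flow. The helper functions below are stubs standing
 * in for firmware-specific code. */

typedef void (*kernel_entry_t)(void);

static void init_hardware(void)            /* set up memory map, CPU registers, interrupts */
{
    printf("hardware initialized\n");
}

static void fake_kernel(void)              /* stands in for the loaded kernel image */
{
    printf("kernel running: completing OS initialization\n");
}

static kernel_entry_t load_kernel_image(void)   /* locate kernel on disk/SSD, copy to RAM */
{
    printf("kernel image loaded into memory\n");
    return fake_kernel;
}

int main(void)                             /* plays the role of the bootstrap program */
{
    init_hardware();                                 /* 1. initialize the hardware   */
    kernel_entry_t kernel = load_kernel_image();     /* 2. load the OS kernel        */
    kernel();                                        /* 3. transfer control to it    */
    return 0;
}
```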
6. Discuss the major activities of an operating system with regard to process management?
Process management is the responsibility of the operating system to manage all
running processes of the system. The operating system performs various activities with
regard to process management, such as:
● Creating and deleting processes
● Allocating and de-allocating the processor (CPU) to processes
● Scheduling processes and switching between them
● Suspending and resuming processes
● Managing deadlock and termination of processes
● Program execution: The operating system loads the program into the memory
and executes it. It also handles the scheduling, synchronization, and termination
of programs.
● Input/output operations: The operating system manages the communication
between the user and the input/output devices, such as keyboard, mouse, printer,
etc. It also provides access to these devices to the programs when needed.
● File system manipulation: The operating system helps the user to create, delete,
read, write, and organize files. It also manages the storage and allocation of files
on the disk.
● Communication: The operating system enables the communication between
processes, either on the same computer or on different computers connected by
a network. It also provides security and privacy for the data transfer.
● Error detection: The operating system detects and handles the errors that may
occur in the hardware, software, or user programs. It also provides mechanisms
for recovery and prevention of errors.
● Resource allocation: The operating system allocates the resources, such as CPU,
memory, disk, etc., to the programs and processes according to their needs and
priorities. It also ensures fair and efficient use of the resources.
● Protection: The operating system protects the system from unauthorized access
and malicious attacks. It also enforces the policies and rules for the access and
use of the system resources.
10. Discuss minimum of three major services of an Operating system with regard to
Process management and device management.
An operating system is software that acts as an intermediary between the user and the
computer hardware. It provides a platform for other application programs to work and
coordinates the use of the hardware and application programs for various users.
Some of the major services of an operating system with regard to process management
and device management are:
Device management: The operating system manages input-output operations and
establishes communication between user programs and device drivers. Device drivers are
the software associated with each piece of hardware managed by the OS, keeping the
devices properly synchronized with the rest of the system. The operating system also keeps
track of the status, allocation, and deallocation of all devices, such as the mouse, keyboard,
scanner, printer, and pen drives.
Memory management: The operating system is responsible for managing the main
memory and the secondary memory of the computer system. It allocates and
deallocates memory space to processes and ensures that each process gets enough
memory to execute. It also implements memory protection and memory sharing
mechanisms to prevent errors and improve efficiency
11. Discuss the three major categories of System Calls?
System calls are the interface between a process and the operating system. They allow
a user program to request a service from the kernel, such as file access, process
creation, or interprocess communication.
● Process Control: These system calls deal with the creation, termination, and
management of processes. For example, fork() creates a new process, exec() runs
an executable file, and exit() terminates a process.
● File Management: These system calls are responsible for manipulating files and
directories. For example, open() opens a file, read() reads data from a file,
write() writes data to a file, and close() closes a file.
● Device Management: These system calls are responsible for controlling and
accessing devices, such as disks, keyboards, printers, etc. For example, ioctl()
performs device-specific operations, and read() and write() can also be used
for device input and output.
The difference between fork and exec is that fork starts a new process that is a copy of the one
that calls it, while exec replaces the current process image with another (different) one. With
fork, the parent and child processes execute concurrently; with exec, control never returns to the
original program unless there is an exec error.
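A minimal POSIX C illustration of this difference; it assumes a Unix-like system, and the program run via execlp (ls) is just an arbitrary example:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                 /* create a copy of the calling process */

    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* Child: replace this process image with the "ls" program.
         * If execlp succeeds, control never returns here. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec fails */
        exit(EXIT_FAILURE);
    } else {
        /* Parent: runs concurrently with the child, then waits for it. */
        wait(NULL);
        printf("child finished\n");
    }
    return 0;
}
```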
13. Discuss Simple Operating structure
14. Distinguish simple and layered structures
2. Discuss about structure of a process in memory with a brief note on sections maintained.
Process in an Operating System
A process is actively running software or a computer code. Any procedure must be carried out in
a precise order. An entity that helps in describing the fundamental work unit that must be
implemented in any system is referred to as a process.
In other words, we create computer programs as text files that, when executed, create processes
that carry out all of the tasks listed in the program.
When a program is loaded into memory, it becomes a process whose memory is divided into four
sections: stack, heap, text, and data. These sections, which make up the simplified depiction of a
process in main memory, are described below.
Stack: The process stack stores temporary information such as method or function arguments,
return addresses, and local variables.
Heap: This is the memory that is dynamically allocated to the process while it runs.
Text: This section contains the compiled program code, together with the current activity
indicated by the value of the program counter and the contents of the processor's registers.
Data: This section contains the global and static variables.
● The system maintains a queue of ready processes or tasks that are waiting for
CPU time.
● The system uses a CPU scheduling algorithm to select a process or task from
the queue and assign it to a processor for execution. The CPU scheduling
algorithm may consider factors such as priority, fairness, and resource utilization.
● The system uses a timer to interrupt the execution of the current process or task
after a fixed amount of time, called a time slice or quantum. This prevents any
process or task from monopolizing the CPU and allows the system to switch to
another process or task.
● The system saves the state of the interrupted process or task and places it back
in the queue, unless it has completed or requested I/O. The system then repeats
the previous steps for the next process or task in the queue.
● Switching context: The dispatcher module saves the state of the current process
and restores the state of the next process to run. This involves updating the
program counter, registers, and memory map of the processes.
● Switching to user mode: The dispatcher module changes the mode of the CPU
from kernel mode to user mode, which allows the process to access the
resources and instructions available to the user level.
● Jumping to the proper location in the user program to restart that program: The
dispatcher module sets the program counter of the CPU to the address of the
instruction that the process was executing before it was interrupted or
preempted. This allows the process to resume its execution from where it left off.
● Managing dispatch latency: The dispatcher module tries to minimize the amount
of time it takes to perform the above functions, which is known as the dispatch
latency. The dispatch latency affects the response time and throughput of the
system, so the dispatcher module should be as fast and efficient as possible.
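The dispatcher functions listed above can be pictured with a hedged C sketch; the PCB layout and the helper routines (save_context, restore_context, switch_to_user_mode) are simplified stand-ins, not a real kernel's interface.

```c
#include <stdio.h>

/* Illustrative dispatcher sketch -- all types and helpers are simplified
 * placeholders, not any real kernel's data structures. */

typedef struct {
    unsigned long program_counter;    /* where the process will resume      */
    unsigned long registers[8];       /* general-purpose register contents  */
} saved_context;

typedef struct {
    int           pid;
    saved_context ctx;
} pcb;                                /* simplified process control block   */

static void save_context(pcb *p)      /* stand-in: store CPU state in PCB   */
{
    printf("saving state of process %d\n", p->pid);
}

static void restore_context(const pcb *p)   /* stand-in: reload CPU state   */
{
    printf("restoring state of process %d (PC = %lu)\n",
           p->pid, p->ctx.program_counter);
}

static void switch_to_user_mode(void) /* stand-in: leave kernel mode        */
{
    printf("switching CPU to user mode\n");
}

/* Everything done between saving the old state and resuming the user
 * program contributes to dispatch latency. */
static void dispatch(pcb *current, pcb *next)
{
    save_context(current);            /* context switch: save old state     */
    restore_context(next);            /* ... and restore the new one        */
    switch_to_user_mode();            /* drop from kernel to user mode      */
    /* restoring the program counter is what "jumps" back into next's code */
}

int main(void)
{
    pcb a = { 1, { 4096, { 0 } } };
    pcb b = { 2, { 8192, { 0 } } };
    dispatch(&a, &b);                 /* pretend the timer preempted a      */
    return 0;
}
```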
11. Compare preemptive and non preemptive SJF cpu scheduling algorithms with an
example.
12. Discuss the process of CPU scheduling by using the Round-Robin algorithm.
CPU scheduling is the process of allocating CPU time to different processes or tasks
based on some criteria. The round-robin algorithm is one of the CPU scheduling
algorithms that assigns a fixed time slice or quantum to each process in a circular
order. The process that is currently running on the CPU will be preempted or interrupted
when its time slice expires, and the next process in the ready queue will be selected to
run. The preempted process will be added to the end of the ready queue and wait for its
next turn. This way, every process gets an equal share of the CPU time and no process
will starve.
The round-robin algorithm is simple, easy to implement, and suitable for time-sharing
systems. However, it also has some disadvantages, such as more overhead of context
switching, larger waiting time and response time, and low throughput. The performance
of the round-robin algorithm depends largely on the choice of the time quantum. If the
time quantum is too large, the algorithm will behave like the first-come-first-serve
algorithm, which is non-preemptive and may cause long waiting time for short
processes. If the time quantum is too small, the algorithm will cause frequent context
switches, which will increase the overhead and reduce the CPU utilization. Therefore,
choosing an optimal time quantum is important for the efficiency of the round-robin
algorithm.
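The behaviour described above can be traced with a small, self-contained C simulation; the three burst times and the quantum of 3 are made-up example values, and all processes are assumed to arrive at time 0.

```c
#include <stdio.h>

/* Round-robin simulation: each process gets at most `quantum` units of CPU
 * per turn; unfinished processes go to the back of the (circular) queue. */
int main(void)
{
    int remaining[] = { 10, 4, 7 };   /* remaining CPU bursts of P1, P2, P3 */
    int n = 3, quantum = 3, time = 0, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {          /* circular scan = ready queue */
            if (remaining[i] == 0)
                continue;                      /* process already finished    */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            printf("t=%2d: P%d runs for %d unit(s)\n", time, i + 1, slice);
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {           /* completion time = turnaround
                                                  time when arrival is 0      */
                done++;
                printf("t=%2d: P%d finishes (turnaround = %d)\n",
                       time, i + 1, time);
            }
        }
    }
    return 0;
}
```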
14. Discuss the problem of starvation in priority cpu scheduling algorithm and explain a
simple solution to it.
Starvation is a problem that occurs in priority cpu scheduling algorithm when a low-priority
process is indefinitely blocked from accessing the cpu because of a continuous stream of
higher-priority processes. This can lead to poor performance and unfair treatment of the
low-priority process. A simple solution to this problem is to use aging, which is a technique of
gradually increasing the priority of processes that wait in the system for a long time. This way,
the low-priority process will eventually have a high enough priority to get the cpu and avoid
starvation. Aging can be implemented by adding a fixed value to the priority of each waiting
process at regular intervals. However, aging also has some limitations, such as increased
complexity, overhead, and unpredictable behavior. Therefore, the aging rate should be set
appropriately to balance the trade-off between fairness and efficiency.
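A hedged sketch of aging in C; the number of processes, the starting priorities, the aging interval, and the increment are all arbitrary example choices (here a larger number means higher priority).

```c
#include <stdio.h>

#define NPROC    3
#define AGE_STEP 1        /* priority boost added per aging interval (example value) */

/* Aging: at every interval, bump the priority of processes that are still
 * waiting, so that a low-priority process cannot starve forever. */
int main(void)
{
    int priority[NPROC] = { 10, 5, 1 };   /* example starting priorities      */
    int waiting[NPROC]  = { 1, 1, 1 };    /* 1 = still in the ready queue     */

    for (int interval = 1; interval <= 5; interval++) {
        for (int i = 0; i < NPROC; i++)
            if (waiting[i])
                priority[i] += AGE_STEP;  /* age every waiting process        */

        printf("after interval %d:", interval);
        for (int i = 0; i < NPROC; i++)
            printf("  P%d=%d", i + 1, priority[i]);
        printf("\n");
    }
    return 0;
}
```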
15. Discuss Interprocess communication using Message passing technique.
16. Explore the differences in multilevel queue scheduling and Multilevel Feedback
scheduling algorithms
Unit-2
2 marks
1. How a parent process would be aware of its child termination? Explain.
2. The following two processes P1 and P2 share a variable B with an initial value of 2
and execute concurrently: P1() { B++; } P2() { B--; } The number of distinct
values that B can possibly take after the execution is ........
3. Discover the next process state when an interrupt occurs while a process is in its
running state and explain the scenario.
4. How a parent process would be aware of its child termination? Explain.
5. What is state save and state restore with respect to context switch?
6. Discover the next process state when an interrupt occurs while a process is in its
running state and explain the scenario.
Unit-3B
4 marks
1. Discuss about base and limit registers.
2. Summarize about base and limit registers.
● Base and limit registers are two types of registers that are used for memory
protection in operating systems. Memory protection is a mechanism that prevents
a process from accessing memory regions that are not allocated or authorized for
it. This helps to ensure the security and stability of the system.
● The base register holds the smallest legal physical memory address; the limit
register specifies the size of the range. For example, if the base register holds
1000 and the limit register holds 800, then the program can legally access all
addresses from 1000 through 1799 (inclusive).
● The memory management unit (MMU) is responsible for translating logical
addresses generated by the CPU into physical addresses in main memory. The
MMU uses the values in the base and limit registers to check whether an address is
within the valid range; if not, it raises an exception or a trap (a small model of
this check is sketched after this list).
● Memory protection can be implemented using different methods, such as keys,
rings, or paging. Keys are based on special codes that indicate which pages of
memory belong to which processes. Rings are based on a hierarchy of protection
levels that restrict what operations a process can perform on its memory. Paging is
based on dividing the physical memory into fixed-size units called frames and
mapping logical addresses to frames using a page table
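Below is the small model of the base/limit check mentioned above, using the 1000/800 example; in real hardware this comparison is done by the MMU's comparators, not by C code.

```c
#include <stdio.h>
#include <stdbool.h>

/* Simplified software model of the base/limit check done by the MMU.
 * Legal addresses are base .. base + limit - 1. */
static bool legal_access(unsigned base, unsigned limit, unsigned address)
{
    return address >= base && address < base + limit;
}

int main(void)
{
    unsigned base = 1000, limit = 800;          /* example from the text */

    printf("1000 -> %s\n", legal_access(base, limit, 1000) ? "ok" : "trap");
    printf("1799 -> %s\n", legal_access(base, limit, 1799) ? "ok" : "trap");
    printf("1800 -> %s\n", legal_access(base, limit, 1800) ? "ok" : "trap");
    return 0;
}
```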
MFT (Multiprogramming with a Fixed number of Tasks) is an early memory-management
scheme in which main memory is divided into a fixed number of fixed-size partitions when the
system is generated. Each partition holds exactly one job, so the degree of multiprogramming is
bounded by the number of partitions, and a job smaller than its partition wastes the remaining
space (internal fragmentation).
MVT (Multiprogramming with a Variable number of Tasks) allocates memory dynamically: each
job is given exactly as much contiguous memory as it requires when it is loaded, so the number
and sizes of the partitions vary over time. This avoids internal fragmentation, but as jobs finish
and leave holes of different sizes, external fragmentation can develop.
A translation look-aside buffer (TLB) can be defined as a memory cache that is used to
reduce the time taken to access the page table again and again.
It is a memory cache that sits close to the CPU, and the time taken by the CPU to access the
TLB is less than the time taken to access main memory.
In other words, the TLB is faster and smaller than main memory, but slower, larger, and
cheaper (per bit) than a register.
The TLB follows the concept of locality of reference, which means that it contains entries
only for the pages that are frequently accessed by the CPU.
Each TLB entry consists of a key (tag) and a value: the page number is compared against the
keys, and a matching entry supplies the corresponding frame number.
A TLB hit is the condition where the desired entry is found in the translation look-aside buffer;
the CPU can then access the actual location in main memory directly. If the entry is not found
(a TLB miss), the page table in main memory must be consulted first, which costs an extra
memory access.
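The benefit of a TLB hit is usually quantified with the effective memory-access time. The figures below (10 ns TLB lookup, 100 ns memory access, 90% hit ratio) are illustrative assumptions, not values from the notes.

```c
#include <stdio.h>

/* Effective access time with a TLB:
 *   EAT = hit * (tlb + mem) + (1 - hit) * (tlb + 2 * mem)
 * A miss needs one extra memory access to read the page table. */
int main(void)
{
    double tlb = 10.0;    /* TLB lookup time, ns (example)    */
    double mem = 100.0;   /* main-memory access time, ns      */
    double hit = 0.90;    /* TLB hit ratio (example)          */

    double eat = hit * (tlb + mem) + (1.0 - hit) * (tlb + 2.0 * mem);
    printf("effective access time = %.1f ns\n", eat);  /* 120.0 ns here */
    return 0;
}
```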
9. Discuss about forward mapped page table.
10. Explain about forward mapped page table.
Forward-mapped page tables are also known as hierarchical paging or multilevel
paging, because they form a tree-like structure with multiple levels. This
technique allows for more efficient use of memory space, as each level of the
tree can be stored in a single frame or multiple frames, depending on its size
Consider a system having a 32-bit logical address space and a page size of 1 KB. The logical
address is then divided into a 22-bit page number and a 10-bit page offset.
As we page the page table, the page number is further divided into a 12-bit outer page number
P1 and a 10-bit inner page number P2, where P2 indicates the displacement within the page of
the inner page table.
In the address-translation scheme for this two-level page table, P1 indexes the outer page table,
whose entry points to an inner page table; P2 indexes that inner page table to obtain the frame
number, which is combined with the offset to form the physical address.
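A small C illustration of splitting a 32-bit logical address under this two-level scheme (12-bit P1, 10-bit P2, 10-bit offset); the sample address is arbitrary.

```c
#include <stdio.h>

/* Split a 32-bit logical address for the two-level scheme above:
 * | p1 : 12 bits | p2 : 10 bits | offset : 10 bits |   (1 KB pages) */
int main(void)
{
    unsigned addr   = 0x12345678u;              /* arbitrary 32-bit logical address */
    unsigned offset = addr & 0x3FFu;            /* low 10 bits                      */
    unsigned p2     = (addr >> 10) & 0x3FFu;    /* next 10 bits                     */
    unsigned p1     = (addr >> 20) & 0xFFFu;    /* top 12 bits                      */

    /* p1 indexes the outer page table; its entry points to an inner page
     * table, which p2 indexes to obtain the frame number; the offset then
     * selects the byte within that frame. */
    printf("p1 = %u, p2 = %u, offset = %u\n", p1, p2, offset);
    return 0;
}
```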
Three Level Page Table
For a system with a 64-bit logical address space, a two-level paging scheme is no longer
appropriate. Suppose the page size in this case is 4 KB, so the offset takes 12 bits and the page
number takes 52 bits. If we still used a two-level scheme, the outer page table would remain
enormous (with one common split, it would still need about 2^42 entries).
Thus, in order to avoid such a large table, the outer page table is divided once more, which
results in a three-level page table.
Causes of Thrashing
Thrashing affects the performance of execution in the Operating system. Also, thrashing results
in severe performance problems in the Operating system.
When CPU utilization is low, the process-scheduling mechanism tries to load many processes
into memory at the same time, thereby increasing the degree of multiprogramming. In this
situation there are more processes in memory than there are available frames, so each process
can be allocated only a limited number of frames.
Whenever a high-priority process arrives in memory and no frame is free at that moment, a page
belonging to another process that currently occupies a frame is moved out to secondary storage,
and the freed frame is allocated to the higher-priority process.
In other words, as soon as memory fills up, processes start spending most of their time waiting
for the pages they need to be swapped in. CPU utilization then drops again, because most of the
processes are waiting for pages.
Thus a high degree of multiprogramming and lack of frames are two main causes of thrashing in
the Operating system.
Unit-3B
2 marks
1. Show the role of the victim frame. (OR) Relate the role of the victim frame.
The victim frame is the page frame that is selected by the operating system to be replaced by a
new page when there is no free frame available in the physical memory.
The operating system must use any page replacement algorithm in order to select the victim
frame.
The operating system must then write the victim frame out to the disk, read the desired page
into the freed frame, and update the page tables. All of this roughly doubles the page-fault
service time, since both a disk write and a disk read are required.
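As a concrete illustration, here is a toy C sketch that picks victim frames with a simple FIFO policy; the frame count and reference string are invented, and real systems typically use more refined algorithms such as LRU approximations.

```c
#include <stdio.h>

#define NFRAMES 3

/* FIFO victim selection: when no frame is free, evict the page that has
 * been resident the longest. */
int main(void)
{
    int frames[NFRAMES] = { -1, -1, -1 };       /* -1 = free frame          */
    int refs[] = { 1, 2, 3, 4, 2, 5 };          /* example reference string */
    int nrefs = 6;
    int next = 0;                               /* index of oldest frame    */

    for (int i = 0; i < nrefs; i++) {
        int page = refs[i], hit = 0;
        for (int f = 0; f < NFRAMES; f++)
            if (frames[f] == page) { hit = 1; break; }
        if (!hit) {
            if (frames[next] != -1)
                printf("page %d replaces victim page %d in frame %d\n",
                       page, frames[next], next);
            else
                printf("page %d loaded into free frame %d\n", page, next);
            frames[next] = page;         /* write victim out, read new page in   */
            next = (next + 1) % NFRAMES; /* oldest frame becomes the next victim */
        }
    }
    return 0;
}
```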
2.Show the two parts of the addresses generated by the CPU.
In a paging system, each address generated by the CPU is divided into two parts: a page number
(p) and a page offset (d). The page number is used as an index into the page table, which gives
the base address of the frame holding that page in physical memory; the offset is then added to
that base address to locate the byte within the frame.
3.Demonstrate the hardware support for relocation and limit registers.
Unit-4
4 marks
1. Discuss about open file table and system wide file table maintained by the operating
system.
The open file table and the system-wide open file table are two data structures that the operating
system maintains to manage the files and directories held on a physical storage device, such as a
hard disk or a flash drive, while they are being used.
A file table is a data structure that stores information about the files and directories,
such as their names, locations, sizes, permissions, and attributes.
An open file table is a file table that contains information about the files that are
currently opened by one or more processes.
A process is an instance of a program that is running on the system. Each process has
its own set of resources, such as memory, CPU, and files. An open file table allows a
process to access the files that it needs for its execution.
A system-wide open file table is a file table that contains an entry for every file that is
currently open by any process on the system. It is a global table shared by all processes:
each entry holds process-independent information such as the file's location on disk, its size,
and a count of how many processes have it open, and the entries of each per-process open
file table point into it.
2. Summarize the pieces of information which are associated with open file table.
The open file table (OFT) is a data structure maintained by the operating system to
track all open files. Each entry in the OFT contains information about a single open file,
including:
File descriptor (FD): A unique identifier for the open file.
File pointer: The current position in the file.
File mode: The permissions for the open file (read-only, write-only, read-write).
File flags: Additional information about the open file, such as whether it is locked or
buffered.
Reference count: The number of processes that have the file open.
Pointer to the inode: The inode is a data structure that contains information about the
file, such as its size, location on disk, and permissions.
The OFT is used by the operating system to manage file access and sharing. For
example, when a process opens a file, the operating system creates a new entry in the
OFT and assigns it a unique FD. The process can then use the FD to read from or write
to the file. When a process closes a file, the operating system decrements the reference
count for the file in the OFT. If the reference count reaches zero, the operating system
deletes the entry from the OFT and closes the file
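The fields above might be modelled roughly as the following C struct; the names, types, and layout are only illustrative, since every operating system defines its own open-file-table format.

```c
#include <stdio.h>

/* Rough model of one open-file-table entry. Field names and types are
 * illustrative only; real operating systems define their own layouts. */
struct inode;                      /* on-disk metadata: size, blocks, permissions, ... */

struct open_file_entry {
    int           fd;              /* file descriptor handed to the process      */
    long          offset;          /* file pointer: current read/write position  */
    int           mode;            /* read-only / write-only / read-write        */
    unsigned      flags;           /* e.g. locked, buffered                      */
    int           ref_count;       /* how many processes have the file open      */
    struct inode *inode;           /* pointer to the file's inode                */
};

int main(void)
{
    struct open_file_entry e = { 3, 0L, 0, 0u, 1, NULL };

    e.ref_count--;                 /* a close() decrements the reference count   */
    if (e.ref_count == 0)
        printf("last close: entry for fd %d can be removed from the OFT\n", e.fd);
    return 0;
}
```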
3. Summarize the problems with the following file access methods: Sequential access,
Direct access, and Linked access.
● Sequential access: This method requires the user to read or write a file in
sequential order, from the beginning toward the end (or rewinding back to the
beginning). It is simple and efficient for files that are processed as a whole, but it
is slow for applications that need data from arbitrary positions, because reaching a
record in the middle of the file means reading everything that comes before it.
● Direct access: This method treats the file as a numbered sequence of blocks or
records, and any block can be read or written immediately by giving its number.
It is fast and efficient for files that are accessed in arbitrary order, such as
databases, because no scanning of the rest of the file is needed. However, it is more
complex to implement than sequential access, the program must know or compute
the block numbers it needs, and with some allocation schemes (such as contiguous
allocation) space may have to be reserved in advance.
● Indexed access: This method combines the features of sequential and direct
access methods. It allows the user to access any part of a file or directory by
specifying its logical address, but it also maintains an index of all the files and
directories in the system. The index can be stored in a separate file or directory, or
embedded in each file or directory. The index can be updated periodically or
dynamically when a file is added, deleted, or modified. This method is fast and
efficient for files that are frequently accessed, as it allows direct access to any part
of the file without scanning the entire file or directory. However, this method is
complex and expensive for files that are not frequently accessed, as it requires
maintaining an index of all the files and directories in the system. This method
also wastes disk space, as it requires storing an index for each file or directory
● Direct access requires a more complex and difficult implementation and use
than sequential file access.
● Sequential access can also be slower and less efficient than direct file access for
random-access operations or when working with large files.
● Sequential-access files are usually stored on secondary storage devices, such as
hard disks or flash drives, and their records have a fixed size and order.
5. Cite the problems with Acyclic-Graph directories and Two-Level directory structure
schemes.
Some of the problems with acyclic-graph directories and two-level directory structure
schemes are:
● Acyclic-graph directories: a shared file or subdirectory can be reached through several
different path names, so the same file may be traversed or counted more than once.
Deletion is also complicated, because removing the file through one link can leave
dangling pointers in the other directories that still reference it, and deciding when the
file can really be deleted requires keeping reference counts.
● Two-level directory structure: each user gets only a single directory, so a user cannot
group related files into subdirectories, and sharing files between users is awkward
because a file belonging to another user must be named by both the user name and the
file name.
8. Interpret how file sharing is done when operating system supports multiple users.
● File sharing is the process of allowing multiple users to access and manipulate
files on a shared storage device, employing various methods depending on the
operating system and system configuration.
● One approach involves using file systems that support multiple users and
permissions, such as FAT32, NTFS, ext4, and HFS+. These file systems enable
users to create directories, assign different access rights, and share files by
specifying directory and file names.
● Another method utilizes network protocols like FTP, HTTP, SMB, NFS, and
Samba, allowing communication between computers over a network. These
protocols facilitate file transfers between computers on the same or different
networks.
● While file sharing offers benefits for collaboration and data backup, it introduces
challenges and risks related to security and privacy. Users must be cautious about
sharing files, granting appropriate permissions, and safeguarding data from
unauthorized access. Additionally, awareness of legal and ethical implications,
especially regarding personal or sensitive data, is crucial in the file-sharing
context.
12. Discuss the advantages and disadvantages of disk space Linked Allocation method?
The disk space linked allocation method is a way of storing files on a disk that does not
require contiguous blocks. Instead, each file is represented by a linked list of disk
blocks, and each block contains a pointer to the next block in the file. Some of the
advantages and disadvantages of this method are:
Advantages: there is no external fragmentation, any free block can be used, a file can grow
as long as free blocks are available, and the directory entry only needs to hold the address
of the first block.
Disadvantages: only sequential access is efficient, because reaching the i-th block means
following i pointers from the start of the file; each block loses a little space to the pointer;
and reliability suffers, since one damaged or lost pointer breaks the chain for the rest of
the file.
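A toy C sketch of how linked allocation chains blocks together; the disk size, block numbers, and end-of-chain marker are invented for illustration.

```c
#include <stdio.h>

#define NBLOCKS     16
#define END_OF_FILE (-1)

/* Toy model of linked allocation: next[b] is the pointer stored inside block b,
 * naming the next block of the same file (END_OF_FILE marks the last block). */
int main(void)
{
    int next[NBLOCKS];
    for (int b = 0; b < NBLOCKS; b++)
        next[b] = END_OF_FILE;

    int file_start = 9;            /* directory entry stores the first block */
    next[9]  = 1;                  /* example chain: 9 -> 1 -> 10 -> 14      */
    next[1]  = 10;
    next[10] = 14;
    next[14] = END_OF_FILE;

    /* Sequential read: follow the pointers block by block. Direct access to
     * the i-th block would require walking the chain from the start, which
     * is the method's main weakness. */
    for (int b = file_start; b != END_OF_FILE; b = next[b])
        printf("reading block %d\n", b);
    return 0;
}
```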
13. Interpret the need of maintaining a free-space list and explain how it is implemented
by using bit vector.
Or
14. Interpret the need of maintaining a free-space list and explain how it is implemented
by using Grouping.
A free-space list records which disk blocks are free, so that the operating system can quickly
find space when a new file is created or an existing file grows. One way to implement the
free-space list is as a bitmap or bit vector: each block of the disk is represented by one bit.
When the block is free its bit is set to 1, and when the block is allocated the bit is set to 0.
The main advantage of the bitmap is that it is relatively simple and efficient for finding the first
free block, or a run of consecutive free blocks, on the disk. Many computers provide
bit-manipulation instructions that make this search fast. The number of the first free block is
calculated as:
(number of bits per word) × (number of 0-value words) + offset of first 1 bit
For Example: Apple Macintosh operating system uses the bitmap method to allocate the disk
space.
Disadvantages
● Finding the first free block efficiently needs hardware (or software) support for locating the
first 1 bit in a word that is not all zeros.
● For large disks the bitmap itself becomes large, and keeping the whole bitmap in main
memory is expensive, so the technique is less practical for very large disks.
Example
Consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25,26, and 27 are free and the
rest of the blocks are allocated. The free-space bitmap would be:
001111001111110001100000011100000
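A small C routine that applies the word-scan idea above to the same example; the 8-bit word size and the packing order are illustrative choices, since real implementations use full machine words.

```c
#include <stdio.h>

#define NBLOCKS       33
#define BITS_PER_WORD 8

/* Build the example bitmap (free = 1) in 8-bit words, then find the first
 * free block as: bits_per_word * (#zero-valued words) + offset of first 1 bit. */
int main(void)
{
    unsigned char words[(NBLOCKS + BITS_PER_WORD - 1) / BITS_PER_WORD] = { 0 };
    int free_blocks[] = { 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25, 26, 27 };

    for (int i = 0; i < (int)(sizeof free_blocks / sizeof free_blocks[0]); i++) {
        int b = free_blocks[i];
        words[b / BITS_PER_WORD] |= (unsigned char)(1u << (b % BITS_PER_WORD));
    }

    int w = 0;
    while (words[w] == 0)                    /* skip words that are all zero  */
        w++;

    int offset = 0;
    while (!(words[w] & (1u << offset)))     /* first 1 bit inside that word  */
        offset++;

    printf("first free block = %d\n", w * BITS_PER_WORD + offset);  /* 2 */
    return 0;
}
```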
15. Interpret the need of maintaining a free-space list and explain how it is implemented
by using grouping.
Grouping
Grouping is another free-space management technique, a modification of the free-list
(linked-list) approach. The first free block stores the addresses of n free blocks: the first n-1 of
these blocks are actually free, and the last one contains the addresses of another n free blocks,
and so on. The advantage over the standard linked-list approach is that the addresses of a large
number of free blocks can be found quickly with a single disk read, instead of following the list
one block at a time.
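A rough C picture of one group in this scheme; the group size n = 4 and the block numbers are invented, and the last slot of each group holds the block number where the next group of addresses is stored.

```c
#include <stdio.h>

#define GROUP_SIZE 4   /* n, chosen arbitrarily for the example */

/* One "group": the first n-1 entries are addresses of free blocks, and the
 * last entry is the address of the block that stores the next group. */
struct free_group {
    int addr[GROUP_SIZE];
};

int main(void)
{
    /* Example: blocks 3, 5, 7 are free; block 12 holds the next group. */
    struct free_group g = { { 3, 5, 7, 12 } };

    for (int i = 0; i < GROUP_SIZE - 1; i++)
        printf("free block: %d\n", g.addr[i]);
    printf("next group is stored in block %d\n", g.addr[GROUP_SIZE - 1]);
    return 0;
}
```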
16. Interpret the need of maintaining a free-space list and explain how it is implemented
by using linked list.
Linked List
This is another technique for free-space management. A linked list of all the free blocks is
maintained: a head pointer, kept in a special location on the disk (and cached in memory), points
to the first free block; that block contains a pointer to the next free block, which in turn points to
the next one, and so on. This scheme is not efficient to traverse, because reading each block of
the list requires a disk I/O; fortunately, traversing the entire free list is not a frequent operation,
since usually only the first free block is needed.
Advantages
● Whenever a file is to be allocated a free block, the operating system can simply allocate
the first block in free space list and move the head pointer to the next free block in the
list.
Disadvantages
● To find a particular set of free blocks, the list must be walked one block at a time, and every
step costs a disk read. In the earlier example, block 2 is the first free block; it points to block 3,
which points to block 4, which points to block 5, which points to block 8, and so on through the
remaining free blocks.
Unit-4
2 Marks
Magnetic tape: the width of the ribbon varies from 4 mm to 1 inch, and it has a storage capacity
of 100 MB to 200 GB.
Advantages :
1. These are inexpensive, i.e., low cost memories.
2. It provides backup or archival storage.
3. It can be used for large files.
4. It can be used for copying from disk files.
5. It is a reusable memory.
6. It is compact and easy to store on racks.
Disadvantages :
● Sequential access is its main disadvantage: data cannot be accessed randomly or directly.
● It requires careful storage: tape is vulnerable to humidity and dust and needs a suitable
environment.
● Data stored on tape cannot be easily updated or modified.
Disk drives are secondary storage devices used in computers to store data and programs.
They consist of spinning disks, called platters, that are coated with a magnetic material.
A read/write head is used to read and write data to the disk. The notes below cover the
organization and structure of disk drives, including the components that make up a disk drive,
how data is organized on the disk, and the performance characteristics of disk drives.
To summarize, rotational latency is the delay between the moment the head reaches the right
track and the start of the data transfer, while the disk rotates the desired sector under the head.
It is determined by the angular position of the sector and the rotational speed of the disk.
Rotational latency can be reduced by using disks with a higher RPM; disk-scheduling algorithms
that reorder requests mainly reduce seek time (disk-arm movement) rather than rotational latency.
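For a rough feel for the numbers, the average rotational latency is half of one full rotation, i.e. 0.5 × (60 / RPM) seconds; the spindle speeds below are common example values, not figures from the notes.

```c
#include <stdio.h>

/* Average rotational latency = half a rotation = 0.5 * (60 / RPM) seconds. */
int main(void)
{
    int rpm[] = { 5400, 7200, 15000 };           /* common example spindle speeds */

    for (int i = 0; i < 3; i++) {
        double latency_ms = 0.5 * (60.0 / rpm[i]) * 1000.0;
        printf("%5d RPM -> average rotational latency %.2f ms\n",
               rpm[i], latency_ms);
    }
    return 0;
}
```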
7. Discuss about the steps in disk initialization.
8. Summarize the steps in disk initialization.
Disk initialization is the process of preparing a disk for use by the operating system (for
example, Windows). It involves choosing a partition style, creating partitions, formatting them
with a file system, and assigning drive letters. The general steps are:
● Initialize the disk and select a partition style (MBR or GPT).
● Create one or more partitions (volumes) on the disk.
● Format each volume with a file system such as NTFS.
● Assign a drive letter to each volume so that it can be accessed.
After these steps the disk is ready to store files.
What is removable media? Removable media is any type of storage device that can be removed
from a computer while the system is running. Removable media makes it easy for a user to move
data from one computer to another. Common examples include:
● CDs
● DVDs
● Blu-ray discs
● USB drives
● SD cards
● floppy disks
● magnetic tape
In a storage context, the main advantage of removable media is that it can deliver the fast data
backup and recovery times associated with storage area networks. Removable storage media can
also help organizations meet corporate backup and recovery requirements because it is
portable. Portability is also one of the technology's main drawbacks. Ransomware attacks can be
transferred from computer to computer by removable media such as a USB drive.