OS All Units 4,2 Marks Answers

Operating system 4-mark and 2-mark answers

Uploaded by

RAHUL M

Unit-1

4 marks
1. Cite the objectives of the operating system?
The operating system acts as a bridge between the user of a computer system and the
computer hardware. All of the applications through which your programs utilize the
computer hardware sit on top of the operating system.
The following are the main objectives of an operating system:

● Efficiency
● Hardware abstraction
● Convenience
● System resource management

An operating system is a program that acts as an interface between the user and the computer
hardware and controls the execution of all kinds of programs.
The operating system performs all the basic tasks like file management, memory management,
process management, handling input and output, and controlling peripheral devices such as disk
drives and printers.
Some popular Operating Systems include Linux Operating System, Windows Operating System,
VMS, OS/400, AIX, z/OS, etc.

2. Discuss the Operating System viewed as a resource allocator & control program?
System view: The OS may also be viewed as a resource allocator. A computer system
comprises various resources, both hardware and software, which must be managed effectively.
The operating system manages these resources, decides between competing demands, controls
program execution, and so on. From this point of view, the operating system's purpose is to
maximize performance: it is responsible for managing hardware resources and allocating them
to programs and users to ensure efficient operation.

From the user point of view, we've discussed the numerous applications that require varying
degrees of user participation. However, we are more concerned with how the hardware interacts
with the operating system than with the user from a system viewpoint. The hardware and the
operating system interact for a variety of reasons, including:

1. Resource Allocation
The hardware contains several resources: registers, caches, RAM, ROM,
CPUs, I/O devices, etc. The operating system allocates these resources when an
application program demands them; only the operating system can do so. It uses a
variety of strategies to get the most out of the hardware, including paging, virtual
memory, and caching. Efficient allocation also matters from the user viewpoint,
because poor resource allocation can make the system lag or hang, degrading the
user experience.

2. Control Program
The control program governs how input and output devices (hardware) interact
with the operating system. The user may request an action that can only be performed
by I/O devices; in that case, the operating system must be able to communicate with,
control, detect, and handle such devices.
3. Discuss about bootstrap program

A bootstrap program is the first code that is executed when a computer system is
started. It is a small, but essential program that is responsible for loading the rest of the
operating system into memory. The bootstrap program is typically stored in a
non-volatile memory such as ROM or flash memory, so that it can be accessed even
when the computer is first turned on.

The bootstrap process is a chain of events, where each step loads and executes the
next program in the sequence. The first step is for the CPU to load the bootstrap
program from ROM into memory. The bootstrap program then performs a number of
tasks, including:

● Initializing the hardware: This includes setting up the memory map, configuring
the CPU registers, and enabling interrupts.
● Loading the operating system kernel: The bootstrap program locates the
operating system kernel on a storage device, such as a hard drive or SSD, and
loads it into memory.
● Transferring control to the kernel: Once the kernel is loaded, the bootstrap
program transfers control to it. The kernel then takes over the boot process and
completes the initialization of the operating system.

4. Discuss computer system architecture.


5. Discuss the three major activities of an operating system with regard to memory
management?
● Memory allocation: The operating system tracks the status of each memory
location, either allocated or free, and provides the empty memory spaces to
incoming processes as required.
● Swapping: The operating system can remove a process from the main
memory and into the secondary storage to free up space for other processes.
● Paging: The operating system can create virtual memory that is more than the
actual memory available by dividing the memory into fixed-sized pages and
storing them in secondary storage.

● Segmentation: The operating system can divide the memory into


variable-sized segments that correspond to logical units of a process, such as
code, data, or stack.
Memory Management
● Memory management refers to the management of primary memory (main memory). Main
memory is a large array of words or bytes, where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a
program to be executed, it must be in main memory. An operating system performs the
following activities for memory management:
➢ Keeps track of primary memory, i.e., which parts are in use and by whom, and which
parts are free.
➢ In multiprogramming, the OS decides which process will get memory when and how
much.
➢ Allocates the memory when a process requests it to do so.
➢ De-allocates the memory when a process no longer needs it or has been terminated.

6. Discuss the major activities of an operating systems with regard to process management?
Process management is the responsibility of the operating system to manage all
running processes of the system. The operating system performs various activities
with regard to process management, such as
● Creating and deleting processes
● Allocating and de-allocating the processor (CPU) to processes
● Scheduling processes and switching between them
● Suspending and resuming processes
● Managing deadlock and termination of processes

● Providing mechanisms for process synchronization and communication

7. Compare Traditional computing with client server computing.

8. Distinguish peer to peer computing and web based computing.


9. Summarize the services provided by an Operating System.
An operating system is a software that acts as an intermediary between the user and
computer hardware. It provides a platform for other application programs to work and
manages the use of the hardware and software resources. Some of the common
services provided by an operating system are:

● Program execution: The operating system loads the program into the memory
and executes it. It also handles the scheduling, synchronization, and termination
of programs.
● Input/output operations: The operating system manages the communication
between the user and the input/output devices, such as keyboard, mouse, printer,
etc. It also provides access to these devices to the programs when needed.
● File system manipulation: The operating system helps the user to create, delete,
read, write, and organize files. It also manages the storage and allocation of files
on the disk.
● Communication: The operating system enables the communication between
processes, either on the same computer or on different computers connected by
a network. It also provides security and privacy for the data transfer.
● Error detection: The operating system detects and handles the errors that may
occur in the hardware, software, or user programs. It also provides mechanisms
for recovery and prevention of errors.
● Resource allocation: The operating system allocates the resources, such as CPU,
memory, disk, etc., to the programs and processes according to their needs and
priorities. It also ensures fair and efficient use of the resources.
● Protection: The operating system protects the system from unauthorized access
and malicious attacks. It also enforces the policies and rules for the access and
use of the system resources.

10. Discuss minimum of three major services of an Operating system with regard to
Process management and device management.

An operating system is software that acts as an intermediary between the user and
computer hardware. It provides a platform for other application programs to work and
coordinates the use of the hardware and application programs for various users.

Some of the major services of an operating system with regard to process management
and device management are:

Process management: The operating system is responsible for creating, deleting,
suspending, resuming, synchronizing, and communicating between user and system
processes, and for handling deadlocks among them. The operating system also decides
which process gets the CPU and for how long, using various scheduling algorithms.

Device management: The operating system manages input-output operations and
establishes communication between the user and device drivers. Device drivers are
software associated with the hardware being managed by the OS, so that synchronization
between the devices works properly. The operating system also keeps track of the
status, allocation, and deallocation of all devices, such as the mouse, keyboard,
scanner, printer, and pen drives.

Memory management: The operating system is responsible for managing the main
memory and the secondary memory of the computer system. It allocates and
deallocates memory space to processes and ensures that each process gets enough
memory to execute. It also implements memory protection and memory sharing
mechanisms to prevent errors and improve efficiency.
11. Discuss the three major categories of System Calls?
System calls are the interface between a process and the operating system. They allow
a user program to request a service from the kernel, such as file access, process
creation, or interprocess communication.
● Process Control: These system calls deal with the creation, termination, and
management of processes. For example, fork() creates a new process, exec() runs
an executable file, and exit() terminates a process.
● File Management: These system calls are responsible for manipulating files and
directories. For example, open() opens a file, read() reads data from a file,
write() writes data to a file, and close() closes a file.
● Device Management: These system calls are responsible for controlling and
accessing devices, such as disks, keyboards, printers, etc. For example, ioctl()
performs device-specific operations, and read() and write() can also be used
for device input and output.
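As a sketch of the file-management category, Python's os module exposes thin wrappers over these system calls; the file name demo.txt is just an illustration:

```python
import os

# open(): create/open a file, returning a small integer file descriptor
fd = os.open("demo.txt", os.O_RDWR | os.O_CREAT | os.O_TRUNC, 0o644)

# write(): send raw bytes through the descriptor
os.write(fd, b"hello, kernel\n")

# lseek()/read(): rewind and read back the bytes just written
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 100)

# close(): release the descriptor
os.close(fd)
os.remove("demo.txt")  # tidy up the demo file
print(data)
```

Each call here maps one-to-one onto the kernel system call of the same name.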

12. Discuss the use of fork and exec system calls


● The fork and exec system calls are used to create and execute new processes in Linux.
● The fork system call creates a new child process that is an exact duplicate of the parent
process, except for some differences such as the process ID and the parent process ID.
● The exec system call replaces the current process image with a new process image, which
means that the new program starts execution from the entry point and the old program is
no longer running.
● The fork and exec system calls are often used together to create a new process and run a
different program in it. For example, a shell program uses fork and exec to run commands
entered by the user

The difference between fork and exec is that fork starts a new process which is a copy of the one
that calls it, while exec replaces the current process image with another (different) one. Both
parent and child processes are executed simultaneously in case of fork, while control never
returns to the original program unless there is an exec error
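The fork-then-exec pattern above can be sketched in Python on a POSIX system; the command run by the child (echo) is just an example:

```python
import os
import sys

pid = os.fork()                      # fork(): child is a duplicate of the parent
if pid == 0:
    # Child: exec() replaces this process image with the echo program.
    os.execvp("echo", ["echo", "hello from the child"])
    sys.exit(1)                      # reached only if exec fails
else:
    # Parent: wait for the child, exactly as a shell does after launching a command.
    _, status = os.waitpid(pid, 0)
    exit_code = os.WEXITSTATUS(status)
    print("child exit status:", exit_code)
```

Note that the child never returns from a successful execvp: its old program image is gone, which is why the shell must fork first to survive running the command.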
13. Discuss Simple Operating structure
14. Distinguish simple and layered structures

15. Discuss microkernels operating structure.


● Definition: A microkernel is a type of operating system kernel designed to
offer only the essential services required for the operating system to function, such
as memory management and process scheduling.
● Objective: The primary goal of a microkernel is to keep the kernel small and
lightweight by relocating non-essential services, like device drivers, into
user space.
● Structural approach: The microkernel approach structures the operating system by
removing nonessential components from the kernel and implementing them as
system-level and user-level programs.
● Functionality: A microkernel provides minimal process and memory management,
along with a communication facility (typically message passing).

16. Distinguish microkernels and modules operating structures


Unit-1
(2 marks)
1. Relate operating system and computer hardware
● An operating system (OS) is a software that manages the computer hardware
and provides an interface for the user and the application programs.
● The hardware consists of the physical components of the computer system,
such as the CPU, memory, disk, keyboard, mouse, monitor, etc.
● The OS acts as an intermediary between the hardware and the software, ensuring
that they can communicate and function properly.
● The OS also performs tasks such as resource allocation, input/output operations,
program execution, and security.
2. With respect to computing environments, relate operating system and computer
hardware.
Computing environments encompass technology infrastructure for software applications.
Operating systems manage hardware, providing interfaces and key functions for efficient
execution.
Computer hardware includes essential components for processing, storage, and user interaction.
Each component plays a crucial role in creating a functional computing system.
3. Demonstrate the objective of multiprogramming

4. Relate the system calls with system utilities


Resource access:
● Connection: System programs frequently use system calls to access
system resources and services that user programs cannot reach directly.
● Example: The cat system program may use the read system call to fetch
and display file contents.
Interface levels:
● Low-level vs. high-level: System calls provide a low-level interface to the
operating system, typically invoked through assembly instructions or library
functions.
● Example: In contrast, system programs provide a high-level interface to
users, invoked through commands or graphical user interfaces.
Platform specificity and portability:
● System calls: Specific to the operating system and hardware platform,
varying in number, name, and functionality.
● System programs: Can be portable across different operating systems and
architectures when they restrict themselves to common system calls or
standard libraries.

5. Demonstrate the main purpose of operating system


There are two basic purposes of an operating system:
● It manages the hardware and software resources of the computer. These resources
include the processor, memory, disk space, etc.
● It provides a consistent way for applications to interact with the hardware without
knowing all the details of the hardware.

6. Relate computer organization and computer architecture in operating systems


Computer architecture is concerned with optimizing the performance of a computer
system and ensuring that it can execute instructions quickly and efficiently. On the other
hand, computer organization refers to the operational units and their interconnections
that implement the architecture specification.
7. Demonstrate about the system programs
System Programming can be defined as the act of building Systems Software using System
Programming Languages. According to Computer Hierarchy, one which comes at last is
Hardware. Then it is Operating System, System Programs, and finally Application Programs.
Program Development and Execution can be done conveniently in System Programs. Some of
the System Programs are simply user interfaces, others are complex. It traditionally lies between
the user interface and system calls.
8. Report the inconvenience that a user can face while interacting with a computer
system, which is without an operating system?
The primary goal of an operating system is to provide a user-friendly and convenient
environment. Using an operating system is not strictly compulsory, but without one the
user would have to perform all process scheduling manually, and translating user code
into machine code would also be very difficult. We therefore use an operating system as
an intermediary between us and the hardware: the user only gives commands to the
operating system, and the operating system does the rest. For this reason, an operating
system should be convenient to use.
unit-2
4 marks
1. Summarize the need of switching a process from one state to another with the help of a
diagram. (OR)
4. Discuss the process involved and the need of context switch.
The need of switching a process from one state to another is to manage the execution
of multiple processes in an operating system.
A process can be in one of the following states: new, ready, running, waiting, or
terminated. The process state diagram shows how a process can move from one state
to another based on certain events and actions.
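In place of the diagram, the legal transitions can be sketched as a small table keyed by (state, event); the event names are informal labels, not OS terminology:

```python
# Allowed process-state transitions, keyed by (current state, event).
transitions = {
    ("new", "admitted"):      "ready",
    ("ready", "dispatched"):  "running",
    ("running", "interrupt"): "ready",       # time slice expired / preempted
    ("running", "io_wait"):   "waiting",     # blocks on I/O or an event
    ("waiting", "io_done"):   "ready",       # I/O completed, rejoin ready queue
    ("running", "exit"):      "terminated",
}

def next_state(state, event):
    return transitions[(state, event)]

# An interrupted running process goes back to ready, not to waiting
print(next_state("running", "interrupt"))  # ready
```

Any (state, event) pair absent from the table is an illegal transition, which is exactly what the state diagram expresses pictorially.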

2. Discuss about structure of a process in memory with a brief note on sections maintained.
Process in an Operating System

A process is a program in execution: actively running software or code. The tasks of a
process must be carried out in a precise order. A process is the entity that describes the
fundamental unit of work to be implemented in a system.

In other words, we write computer programs as text files that, when executed, become
processes that carry out all of the tasks listed in the program.
When a program is loaded into memory it becomes a process, which may be divided into
four sections: stack, heap, text, and data. A simplified depiction of a process in main
memory is shown in the diagram below.

Stack: The process stack stores temporary data such as function arguments, return
addresses, and local variables.

Heap: This is memory that is dynamically allocated to the process while it is running.

Text: This section contains the compiled program code, together with the current activity
represented by the value of the program counter and the contents of the processor's registers.

Data: This section contains the global and static variables.

3. Distinguish between CPU bound, I/O bound processes.


5. Summarize the use of fork and exec system calls.

6. Summarize the use of exit and wait system calls.


7. Compare and contrast Single-threaded and multi-threaded process
8. Summarize the need of scheduling in time shared multi processing systems and the
process involved
Scheduling in time-shared multiprocessing systems is the process of allocating CPU
time to multiple users or processes that share the same computer. The main goals of
scheduling are to reduce response time and to increase system throughput and
efficiency. The steps involved in scheduling are:

● The system maintains a queue of ready processes or tasks that are waiting for
CPU time.
● The system uses a CPU scheduling algorithm to select a process or task from
the queue and assign it to a processor for execution. The CPU scheduling
algorithm may consider factors such as priority, fairness, and resource utilization.
● The system uses a timer to interrupt the execution of the current process or task
after a fixed amount of time, called a time slice or quantum. This prevents any
process or task from monopolizing the CPU and allows the system to switch to
another process or task.
● The system saves the state of the interrupted process or task and places it back
in the queue, unless it has completed or requested I/O. The system then repeats
the previous steps for the next process or task in the queue.

9. Delineate the role of long term scheduler in context switching.


The role of the long term scheduler in context switching is to determine which processes are
allowed to enter the ready queue and compete for the CPU. The long term scheduler controls the
degree of multiprogramming, i.e., the number of processes that are present in the main memory
at any point in time. The long term scheduler selects processes from the job pool, which is the
collection of all processes waiting to be executed, and loads them into the main memory. The
long term scheduler runs infrequently, and may be invoked when a process terminates or when
the system load changes. The long term scheduler affects the system performance and the
average response time of the processes. A good long term scheduler should balance the mix of
CPU-bound and I/O-bound processes in the ready queue, so that the CPU utilization and the
throughput are optimized.
10. List the functions of the Dispatcher Module.
The dispatcher module is a component of the operating system that is responsible for
transferring control of the CPU to the process selected by the short-term scheduler.
Some of the functions of the dispatcher module are:

● Switching context: The dispatcher module saves the state of the current process
and restores the state of the next process to run. This involves updating the
program counter, registers, and memory map of the processes.
● Switching to user mode: The dispatcher module changes the mode of the CPU
from kernel mode to user mode, which allows the process to access the
resources and instructions available to the user level.
● Jumping to the proper location in the user program to restart that program: The
dispatcher module sets the program counter of the CPU to the address of the
instruction that the process was executing before it was interrupted or
preempted. This allows the process to resume its execution from where it left off.
● Managing dispatch latency: The dispatcher module tries to minimize the amount
of time it takes to perform the above functions, which is known as the dispatch
latency. The dispatch latency affects the response time and throughput of the
system, so the dispatcher module should be as fast and efficient as possible.

11. Compare preemptive and non preemptive SJF cpu scheduling algorithms with an
example.

12. Discuss the process of CPU scheduling by using the Round-Robin algorithm.
CPU scheduling is the process of allocating CPU time to different processes or tasks
based on some criteria. The round-robin algorithm is one of the CPU scheduling
algorithms that assigns a fixed time slice or quantum to each process in a circular
order. The process that is currently running on the CPU will be preempted or interrupted
when its time slice expires, and the next process in the ready queue will be selected to
run. The preempted process will be added to the end of the ready queue and wait for its
next turn. This way, every process gets an equal share of the CPU time and no process
will starve.
The round-robin algorithm is simple, easy to implement, and suitable for time-sharing
systems. However, it also has some disadvantages, such as more overhead of context
switching, larger waiting time and response time, and low throughput. The performance
of the round-robin algorithm depends largely on the choice of the time quantum. If the
time quantum is too large, the algorithm will behave like the first-come-first-serve
algorithm, which is non-preemptive and may cause long waiting time for short
processes. If the time quantum is too small, the algorithm will cause frequent context
switches, which will increase the overhead and reduce the CPU utilization. Therefore,
choosing an optimal time quantum is important for the efficiency of the round-robin
algorithm.
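The behaviour described above can be sketched as a small simulation; the process names, burst lengths, and quantum are made-up values:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin for processes all arriving at t=0.
    bursts maps pid -> CPU burst length; returns pid -> completion time."""
    remaining = dict(bursts)
    ready = deque(bursts)                      # FIFO ready queue
    clock, completion = 0, {}
    while ready:
        pid = ready.popleft()
        run = min(quantum, remaining[pid])     # run for at most one time slice
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            completion[pid] = clock            # finished: record completion time
        else:
            ready.append(pid)                  # preempted: rejoin at the tail
    return completion

# Bursts 5, 3, 1 with quantum 2: P3 finishes first, P1 last
result = round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2)
print(result)
```

Shrinking the quantum toward 1 makes the simulation preempt on almost every step (many context switches); growing it past the largest burst makes it degenerate into first-come-first-serve, exactly the trade-off described above.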

13. Discuss the advantages of multilevel queue scheduling algorithms.


Advantages of Multilevel Queue Scheduling
● Efficient Resource Utilization: Multilevel queue scheduling allows the system
to allocate resources more efficiently by grouping processes with similar
resource requirements into separate queues.
● Improved Response Time: By assigning higher priority to interactive
processes that require a fast response time, multilevel queue scheduling can
reduce response time and improve system performance.
● Low Scheduling Overhead: Because MLQ assigns each process permanently to one
queue, scheduling overhead is low.

● Different Scheduling Methods: MLQ lets us apply different scheduling methods to
distinct classes of processes.

14. Discuss the problem of starvation in priority cpu scheduling algorithm and explain a
simple solution to it.
Starvation is a problem that occurs in the priority CPU scheduling algorithm when a low-priority
process is indefinitely blocked from accessing the CPU by a continuous stream of
higher-priority processes. This can lead to poor performance and unfair treatment of the
low-priority process. A simple solution to this problem is aging, a technique of
gradually increasing the priority of processes that have waited in the system for a long time.
Eventually the low-priority process's priority becomes high enough for it to get the CPU and
avoid starvation. Aging can be implemented by adding a fixed value to the priority of each
waiting process at regular intervals. However, aging also has some limitations, such as
increased complexity, overhead, and less predictable behavior, so the aging rate should be
set appropriately to balance the trade-off between fairness and efficiency.
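A toy sketch of aging, assuming a lower number means higher priority; the process names, priorities, and bursts are hypothetical:

```python
def schedule_with_aging(processes, aging_step=1):
    """processes: pid -> (priority, burst); lower number = higher priority.
    Each time unit the highest-priority ready process runs for one unit,
    and every other waiting process ages: its priority number drops
    (i.e. its priority rises), so no process starves forever."""
    prio = {p: pr for p, (pr, _) in processes.items()}
    left = {p: b for p, (_, b) in processes.items()}
    finish_order = []
    while left:
        pid = min(left, key=lambda p: prio[p])   # pick the highest priority
        left[pid] -= 1                           # run for one time unit
        if left[pid] == 0:
            del left[pid]
            finish_order.append(pid)
        for other in left:                       # aging for everyone still waiting
            if other != pid:
                prio[other] -= aging_step
    return finish_order

# The low-priority process still completes, instead of starving
order = schedule_with_aging({"P_hi1": (1, 3), "P_hi2": (2, 3), "P_low": (8, 2)})
print(order)
```

Setting aging_step=0 turns aging off and recovers plain priority scheduling, where a steady supply of high-priority arrivals could block P_low indefinitely.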
15. Discuss Interprocess communication using Message passing technique.

16. Explore the differences in multilevel queue scheduling and Multilevel Feedback
scheduling algorithms
Unit-2
2marks
1. How would a parent process be aware of its child's termination? Explain.
2. The following two processes P1 and P2 share a variable B with an initial value of 2
and execute concurrently: P1(){ B++; } P2(){ B--; }. The number of distinct
values that B can possibly take after the execution is ______.
3. Discover the next process state when an interrupt occurs while a process is in its
running state, and explain the scenario.
4. How would a parent process be aware of its child's termination? Explain.
5. What are state save and state restore with respect to a context switch?
6. Discover the next process state when an interrupt occurs while a process is in its
running state, and explain the scenario.
unit-3B
4 marks
1. Discuss about base and limit registers.
2. Summarize about base and limit registers.
● Base and limit registers are two types of registers that are used for memory
protection in operating systems. Memory protection is a mechanism that prevents
a process from accessing memory regions that are not allocated or authorized for
it. This helps to ensure the security and stability of the system.
● The base register holds the smallest legal physical memory address; the limit
register specifies the size of the range. For example, if the base register holds
1000 and the limit register holds 800, then the program can legally access all
addresses from 1000 through 1799 (inclusive).
● The memory management unit (MMU) is responsible for translating logical
addresses generated by the CPU into physical addresses in the main memory. The
MMU uses the values in the base and limit registers to check if an address is
within the valid range. If not, it raises an exception or a trap.
● Memory protection can be implemented using different methods, such as keys,
rings, or paging. Keys are based on special codes that indicate which pages of
memory belong to which processes. Rings are based on a hierarchy of protection
levels that restrict what operations a process can perform on its memory. Paging is
based on dividing the physical memory into fixed-size units called frames and
mapping logical addresses to frames using a page table
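The base/limit check performed by the MMU can be sketched directly, using the 1000/800 example from above:

```python
def check_access(addr, base, limit):
    """MMU sketch: every CPU-generated address must satisfy
    base <= addr < base + limit, or the hardware traps to the OS."""
    if base <= addr < base + limit:
        return addr                                  # legal access proceeds to memory
    raise MemoryError("trap: addressing error")      # OS then terminates the process

# Base 1000, limit 800: legal addresses are 1000 through 1799 inclusive
print(check_access(1000, base=1000, limit=800))      # first legal address
print(check_access(1799, base=1000, limit=800))      # last legal address
```

Address 1800 or anything below 1000 would raise the trap, which is how one process is prevented from reading or writing another process's memory.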

3. Discuss about the role of relocation register.


4. Summarize about the role of relocation register.
A relocation register is a hardware register that holds a constant value which is added to the
logical address of every memory reference a program makes. The relocation register thus maps
logical addresses to physical addresses in main memory; the user program never sees the real
physical addresses. The relocation register supports memory protection, preventing a process
from accessing unallocated or unauthorized memory, and it aids fault tolerance and security by
allowing processes to run in different regions of memory with different protection levels.
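The mapping is a single addition; the sketch below uses the classic illustrative value 14000 for the relocation register:

```python
RELOCATION = 14000                 # value loaded into the relocation register

def to_physical(logical):
    # Hardware adds the relocation register to every logical address at access time.
    return logical + RELOCATION

# Logical address 346 maps to physical address 14346
print(to_physical(346))
```

Because the addition happens in hardware on every reference, the OS can move a process simply by reloading the relocation register, without rewriting any addresses in the program.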
5. Discuss about MFT and MVT.
6. Summarize about MFT and MVT.
MFT and MVT are two early memory management techniques in operating systems.
Here is a brief summary of each technique:

MFT stands for Multiprogramming with a Fixed number of Tasks. Main memory is divided
into a fixed number of fixed-size partitions at system startup, and each partition holds at
most one process. MFT is simple to implement, but it suffers from internal fragmentation,
since a process rarely fills its partition exactly, and the degree of multiprogramming is
limited by the number of partitions.

MVT stands for Multiprogramming with a Variable number of Tasks. Partitions are created
dynamically: each process is allocated exactly as much contiguous memory as it needs when
it is loaded. MVT avoids internal fragmentation, but as processes come and go the free
memory is broken into scattered holes, causing external fragmentation, which may require
compaction.
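Under MVT (Multiprogramming with a Variable number of Tasks), the OS carves each process's partition out of a free hole; a minimal first-fit allocation sketch, with a made-up hole list and request size:

```python
def first_fit(holes, request):
    """MVT-style variable partitioning: allocate from the first hole big enough.
    holes: list of (start, size) tuples; returns (start, new_holes) or None."""
    for i, (start, size) in enumerate(holes):
        if size >= request:
            new_holes = holes[:i] + holes[i + 1:]
            if size > request:
                # Shrink the hole: the allocation takes its front portion.
                new_holes.insert(i, (start + request, size - request))
            return start, new_holes
    return None   # no hole fits: external fragmentation in action

# Request 250 units: the 100-unit hole is too small, the 400-unit hole is split
start, holes_left = first_fit([(0, 100), (300, 400)], 250)
print(start, holes_left)
```

Note that the two leftover holes may together exceed a later request that still cannot be satisfied contiguously, which is exactly the external-fragmentation problem compaction addresses.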

7. Discuss about TLB.


8. Summarize about TLB.

A translation lookaside buffer (TLB) is a memory cache used to reduce the time taken to
access the page table again and again.

It is a cache that sits close to the CPU, and the time taken by the CPU to access the
TLB is less than the time taken to access main memory.

In other words, the TLB is faster and smaller than main memory, but cheaper and larger
than a register.

The TLB exploits locality of reference: it holds entries only for the pages that are most
frequently accessed by the CPU.

In a translation lookaside buffer, entries are stored as tags and keys, with whose help
the mapping is done.

A TLB hit is the condition where the desired entry is found in the translation lookaside
buffer. When this happens, the CPU simply accesses the actual location in main memory.
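A toy sketch of a TLB sitting in front of a page table; the page-to-frame mapping and the 1 KB page size are hypothetical:

```python
PAGE_SIZE = 1024                          # 1 KB pages
page_table = {0: 5, 1: 9, 2: 3, 3: 7}    # page number -> frame number (made up)
tlb = {}                                  # small cache of recent translations

def translate(logical, stats):
    page, offset = divmod(logical, PAGE_SIZE)
    if page in tlb:                       # TLB hit: skip the page-table walk
        stats["hits"] += 1
        frame = tlb[page]
    else:                                 # TLB miss: walk the page table, then cache
        stats["misses"] += 1
        frame = page_table[page]
        tlb[page] = frame
    return frame * PAGE_SIZE + offset

stats = {"hits": 0, "misses": 0}
addrs = [100, 1100, 120, 1200]            # pages 0, 1, 0, 1 - locality of reference
phys = [translate(a, stats) for a in addrs]
print(phys, stats)
```

Thanks to locality, the second reference to each page hits in the TLB, so only the first access to a page pays the cost of the page-table walk.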
9. Discuss about forward mapped page table.
10. Explain about forward mapped page table.
Forward-mapped page tables are also known as hierarchical paging or multilevel
paging, because they form a tree-like structure with multiple levels. This
technique allows for more efficient use of memory space, as each level of the
tree can be stored in a single frame or multiple frames, depending on its size

Two Level Page Table

Consider a system having a 32-bit logical address space and a page size of 1 KB. The
logical address is divided into:

● a page number consisting of 22 bits, and

● a page offset consisting of 10 bits.

Since the page table itself is paged, the page number is further divided into:

● an outer page number (P1) consisting of 12 bits, and

● an inner page number (P2) consisting of 10 bits.

Thus, in the logical address:

P1 is an index into the outer page table, and

P2 is the displacement within the page of the inner page table.

Because address translation works from the outer page table inward, this scheme is
known as a forward-mapped page table.
Three Level Page Table

For a system with a 64-bit logical address space, a two-level paging scheme is not
appropriate. Suppose that the page size in this case is 4 KB. With a two-level scheme
and a 10-bit inner index, the outer page table would still need on the order of 2^42
entries, which is far too large to keep in contiguous memory.

Thus, in order to avoid such a large table, the solution is to divide the outer page
table further, which results in a three-level page table.
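The two-level translation for the 12/10/10-bit split above can be sketched as follows (the table contents are hypothetical, used only to show the outer-to-inner walk):

```python
# 32-bit logical address = P1 (12 bits) | P2 (10 bits) | offset d (10 bits)
def split(addr):
    d  = addr & 0x3FF           # low 10 bits: page offset
    p2 = (addr >> 10) & 0x3FF   # next 10 bits: index into an inner page table
    p1 = (addr >> 20) & 0xFFF   # high 12 bits: index into the outer page table
    return p1, p2, d

outer = {3: {5: 77}}            # outer[p1] -> inner table; inner[p2] -> frame

def translate(addr):
    p1, p2, d = split(addr)
    frame = outer[p1][p2]       # forward mapping: outer table first, then inner
    return frame * 1024 + d     # 1 KB frames

addr = (3 << 20) | (5 << 10) | 9
print(translate(addr))          # frame 77, offset 9
```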

11. Discuss about virtual memory.


12. Summarize about virtual memory.
● Virtual memory is a memory management technique used by operating systems (OS) to
allow a computer to use more memory than it physically has.
● It works by using a part of the secondary storage, such as a hard disk or SSD, as an
extension of the main memory, also known as RAM.
● The OS can move some of the data and programs that are not currently in use from RAM
to the secondary storage, and bring them back when needed.
● This way, the OS can create the illusion of having a large amount of memory for the
programs to run, even if the actual RAM is limited.

13. Discuss about belady's anomaly.


(OR)
14. Summarize belady's anomaly.

Belady's anomaly is the phenomenon in which increasing the number of page frames
allocated to a process can increase, rather than decrease, the number of page faults.
It occurs with page-replacement algorithms, such as FIFO, that are not stack
algorithms; algorithms such as LRU and Optimal do not exhibit it.

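Belady's anomaly (FIFO replacement suffering more page faults with more frames) can be reproduced with a short simulation; the reference string below is the classic example, giving 9 faults with 3 frames but 10 faults with 4:

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults for FIFO replacement with n_frames frames."""
    frames, faults = deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:
                frames.popleft()      # evict the oldest resident page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults — more frames, yet more faults
```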
15. Discuss about the causes of thrashing.


(OR)
16. Summarize the causes of thrashing.

Causes of Thrashing

Thrashing affects the performance of execution in the operating system and results in
severe performance problems.

When CPU utilization is low, the process-scheduling mechanism tries to load many
processes into memory at the same time, so the degree of multiprogramming increases. In
this situation there are more processes in memory than there are available frames, so
each process can be allocated only a limited number of frames.

Whenever a high-priority process arrives in memory and no frame is free, a frame
occupied by another process is written out to secondary storage, and the freed frame is
allocated to the higher-priority process.

In other words, as soon as memory fills up, processes start spending most of their time
waiting for their required pages to be swapped in. CPU utilization then drops because
most processes are waiting for pages, which prompts the scheduler to raise the degree of
multiprogramming even further, making the problem worse.

Thus a high degree of multiprogramming and a lack of frames are the two main causes of
thrashing in the operating system.
Unit-3b
2 marks
1.Show the role of the victim frame. (OR) Relate the role of the victim frame.
The victim frame is the page frame that is selected by the operating system to be replaced by a
new page when there is no free frame available in physical memory.
The operating system must use a page-replacement algorithm in order to select the victim
frame.
The operating system must then write the victim frame out to disk (if it has been modified),
read the desired page into the frame, and update the page tables. Together these steps require
double the disk access time.
2.Show the two parts of the addresses generated by the CPU.
Every address generated by the CPU is divided into two parts: a page number (p) and a page
offset (d). The page number is used as an index into the page table, which holds the base
address of the corresponding frame in physical memory; the page offset is combined with that
base address to form the physical address.
3.Demonstrate the hardware support for relocation and limit registers.
Each logical address is first compared with the limit register; if the address is not less than
the limit, the hardware traps to the operating system with an addressing error. Otherwise, the
address is added to the relocation (base) register to produce the physical address sent to
memory.
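The relocation/limit check performed on every memory access can be sketched in software as follows (the base and limit values are hypothetical):

```python
BASE, LIMIT = 14000, 3000   # process loaded at 14000, logical size 3000

def map_address(logical):
    """Mimic the MMU: limit check first, then add the relocation register."""
    if logical >= LIMIT:
        raise MemoryError("trap: addressing error")   # hardware trap to the OS
    return BASE + logical

print(map_address(100))   # legal access -> physical address 14100
```

An access such as `map_address(3500)` would exceed the limit register and trap, which is how the hardware protects one process's memory from another.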
Unit-4
4 marks

1. Discuss about open file table and system wide file table maintained by the operating
system.
These are two types of file tables used by the operating system to manage the files
that processes have opened on a storage device such as a hard disk or a flash drive.
A file table is a data structure that stores information about files and directories,
such as their names, locations, sizes, permissions, and attributes.

A per-process open file table contains information about the files that are currently
opened by that process, such as the current file pointer and access mode; each of its
entries points into the system-wide table.

A process is an instance of a program that is running on the system. Each process has
its own set of resources, such as memory, CPU time, and files. The open file table
allows a process to access the files that it needs for its execution.

A system-wide open file table contains information about all the files that are open
by all the processes on the system. It is a global table shared by all processes; it
holds process-independent information such as the file's location on disk, its size,
and an open count recording how many processes currently have the file open.

2. Summarize the pieces of information which are associated with open file table.
The open file table (OFT) is a data structure maintained by the operating system to
track all open files. Each entry in the OFT contains information about a single open file,
including:
File descriptor (FD): A unique identifier for the open file.
File pointer: The current position in the file.
File mode: The permissions for the open file (read-only, write-only, read-write).
File flags: Additional information about the open file, such as whether it is locked or
buffered.
Reference count: The number of processes that have the file open.
Pointer to the inode: The inode is a data structure that contains information about the
file, such as its size, location on disk, and permissions.
The OFT is used by the operating system to manage file access and sharing. For
example, when a process opens a file, the operating system creates a new entry in the
OFT and assigns it a unique FD. The process can then use the FD to read from or write
to the file. When a process closes a file, the operating system decrements the reference
count for the file in the OFT. If the reference count reaches zero, the operating system
deletes the entry from the OFT and closes the file.
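The interplay between per-process descriptors and the system-wide reference count can be sketched as follows (the names and layout are illustrative, not any real OS's structures):

```python
system_wide = {}   # path -> {"refcount": ...} (system-wide open file table)

def do_open(proc_fds, path):
    """Open a file for one process: bump the global count, hand out an FD."""
    entry = system_wide.setdefault(path, {"refcount": 0})
    entry["refcount"] += 1
    fd = len(proc_fds)        # next descriptor for this process (simplified)
    proc_fds[fd] = path
    return fd

def do_close(proc_fds, fd):
    """Close a descriptor; the last closer removes the system-wide entry."""
    path = proc_fds.pop(fd)
    system_wide[path]["refcount"] -= 1
    if system_wide[path]["refcount"] == 0:
        del system_wide[path]

p1, p2 = {}, {}               # per-process open file tables
do_open(p1, "/tmp/log"); do_open(p2, "/tmp/log")
print(system_wide["/tmp/log"]["refcount"])  # 2: both processes have it open
do_close(p1, 0); do_close(p2, 0)
print("/tmp/log" in system_wide)            # False: entry removed at count 0
```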
3. Summarize the problems with the following file access methods: sequential access,
direct access, and indexed access.
● Sequential access: This method requires records to be read or written in order,
from the beginning of the file toward the end. It is simple and efficient for files
that are processed in their entirety, such as logs or compiler input, but it is
slow for workloads that need specific records, because reaching record n requires
reading past the n−1 records that come before it.
● Direct access: This method allows any block of a file to be read or written by
specifying its block number, so it is fast and efficient for files that are
accessed randomly, such as databases. However, it requires fixed-length records so
that block numbers can be computed, and it places the burden of calculating the
correct block number on the user or application.
● Indexed access: This method builds on direct access by maintaining an index of
the file's records; the index is searched to find the pointer to the desired
record, which is then read directly. It provides fast lookups, but for large files
the index itself becomes large and may have to be stored on disk in multiple
levels, adding space overhead and extra disk accesses. The index must also be
updated whenever a record is added, deleted, or modified.
4. Interpret sequential access on a direct access file.


Sequential access can easily be simulated on a direct access file. The system maintains
a variable, cp, that marks the current position in the file.
To perform a "read next" operation, the system reads the record at position cp and then
increments cp; a "write next" writes at position cp and advances it; resetting cp to 0
rewinds the file.
For example, a program that processes every customer record in order can run unchanged
on a direct access file, with each sequential request translated into a direct read of
block cp.
Some additional points:

● The reverse simulation is inefficient: providing direct access on a sequential
file requires reading through all the records that come before the desired one.
● Because direct access files support both access patterns, they are the more
general organization, and many systems implement sequential access as a layer on
top of direct access.
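The counter-based simulation described above can be sketched as follows (the record data is illustrative):

```python
records = ["rec0", "rec1", "rec2"]   # a direct-access file of records

cp = 0                               # current position (file pointer)

def read_next():
    """Sequential 'read next' built from a direct read of block cp."""
    global cp
    rec = records[cp]                # i.e. read_direct(cp)
    cp += 1
    return rec

def rewind():
    """Reset the simulated sequential file to its beginning."""
    global cp
    cp = 0

print(read_next())  # rec0
print(read_next())  # rec1
```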

5. Cite the problems with Acyclic-Graph directories and Two-Level directory structure
schemes.
Some of the problems with acyclic-graph directories and two-level directory structure
schemes are:

● Acyclic-graph directories are more complex and difficult to implement than


two-level directories. They require extra disk space and bookkeeping to store the
links between files and directories. A shared file may have several absolute path
names, so a traversal can encounter the same file more than once; deleting a shared
file can leave dangling pointers unless reference counts are maintained; and if
general links are added carelessly, cycles can be introduced into the directory
structure, causing searches to loop.
● Two-level directory structure schemes are simpler and easier to implement than
acyclic-graph directories. They give each user a separate file directory.
However, they have some limitations, such as:
● They cannot group related files together, such as all the files of a project,
because a user cannot create subdirectories inside their directory.
● They isolate users from one another, which makes it difficult for users to share
files or cooperate on a task.
● They may not be scalable if the number of users or projects increases.

6. Cite the advantages of tree-Structured directories when compared with single-level


and Two-level directorystructure schemes.
● Tree-structured directories are a type of directory structure in which a directory
entry may be a sub-directory or a file. They have some advantages over
single-level and two-level directory structures, such as:
● They can group the same type of files into one directory, which makes it easier to
organize and manage files. For example, you can create a directory for each
department in an organization and store all the files related to that department in
that directory.
● They remove the limitation of the two-level directory structure, which does not
allow creating sub-directories within a directory. For example, you can create a
tree-structured directory for each project and store all the files related to that
project in its sub-directories.
● They can improve the efficiency of file access, as they allow users to navigate
through the directories using a hierarchical structure and path names. This makes
it easier to locate and access the files they need.

7. Interpret the need of protection in file system.


The need to protect files is a direct result of the ability to access files.
● Protection in file system is the need of ensuring that files are not
accessed, modified, or deleted by unauthorized users or programs. It is
important for maintaining the security and integrity of data, as well as
preventing data breaches and other security incidents. Some of the
reasons why protection in file system is needed are:
● To prevent unauthorized access to sensitive or confidential information
stored in files. For example, a bank account file may contain personal and
financial details of customers, which should not be exposed to anyone
who does not have the proper authorization.
● To prevent unauthorized modification or deletion of files that may affect
the functionality or performance of the system. For example, a backup file
may contain important data that needs to be preserved and restored in
case of system failure or disaster.
● To prevent unauthorized disclosure or leakage of information stored in
files. For example, a research paper file may contain intellectual property
or trade secrets that should not be shared with competitors or
adversaries.
● To comply with legal and ethical obligations regarding the ownership and
usage of files. For example, a student file may contain academic records
and assignments that need to be protected from plagiarism and cheating.

8. Interpret how file sharing is done when operating system supports multiple users.
● File sharing is the process of allowing multiple users to access and manipulate
files on a shared storage device, employing various methods depending on the
operating system and system configuration.
● One approach involves using file systems that support multiple users and
permissions, such as NTFS, ext4, and HFS+. These file systems enable users to
create directories and share files by assigning different access rights, for
example to the owner, the owner's group, and all other users.

● Another method utilizes network protocols like FTP, HTTP, SMB, NFS, and
Samba, allowing communication between computers over a network. These
protocols facilitate file transfers between computers on the same or different
networks.
● While file sharing offers benefits for collaboration and data backup, it introduces
challenges and risks related to security and privacy. Users must be cautious about
sharing files, granting appropriate permissions, and safeguarding data from
unauthorized access. Additionally, awareness of legal and ethical implications,
especially regarding personal or sensitive data, is crucial in the file-sharing
context.

9. Summarize the fields in typical file-control block and explain them.


● A file control block (FCB) is a data structure that represents a file within a file
system. It contains information about the file, such as its name, location, size, date
created, and other attributes. The FCB is created when a file is created and is used
by the file system to manage and access the file’s contents.
The fields in a typical FCB are:
● File name: The name of the file as it appears in the directory. In older systems
such as MS-DOS it was limited to 8 characters (plus a 3-character extension);
modern file systems permit much longer names.
● File type: The type of the file, such as text, image, audio, etc. It can be
determined by the operating system or by the user.
● File size: The size of the file in bytes. It can be updated by the user or by the
operating system.
● File status: The status of the file, such as open, closed, deleted, etc. It can be
changed by the user or by the operating system.
● File attributes: The attributes of the file, such as read-only, hidden, system, etc.
They can be set or modified by the user or by the operating system.
● File pointer: A pointer to a location in the disk where the data of the file is
stored. It can be moved by reading or writing data from or to the disk.
● File allocation table (FAT): A table that maps logical blocks of a file to physical
blocks of a disk. It allows efficient allocation and deallocation of disk space for
files.
● Free space count: The number of free blocks in a FAT that can be used for new
files.

10. Summarize the need of in-memory file-system structures.


● In-memory file-system structures are data structures that store information about
files and directories in the main memory of the computer, rather than on disk.
This allows for faster access and manipulation of file data, as well as reducing the
overhead of disk I/O operations. Some of the benefits of using in-memory
file-system structures are:
● Improved performance: In-memory file-system structures can reduce the latency
and frequency of disk I/O, as well as enable parallel processing of file operations.
For example, memory-mapping hardware can handle file accesses by mapping
the file into the process address space.
● Reduced disk traffic: In-memory structures can avoid repeated disk reads of
directory information, and fewer disk operations also mean less wear on disk
drives. For example, a directory cache stores recently accessed directory
information in memory.
● Enhanced security: In-memory file-system structures can protect file data from
unauthorized access or modification, by encrypting or locking the structures in
memory. For example, a secure file system can use cryptographic techniques to
ensure the integrity and confidentiality of file data.
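A directory cache of the kind mentioned above can be sketched as a small LRU map (the `DirCache` class and `fake_disk` reader are hypothetical):

```python
from collections import OrderedDict

class DirCache:
    """Keep recently used directory entries in memory to avoid disk I/O."""
    def __init__(self, capacity):
        self.capacity, self.cache = capacity, OrderedDict()

    def lookup(self, name, read_from_disk):
        if name in self.cache:
            self.cache.move_to_end(name)      # cache hit: no disk access
            return self.cache[name]
        entry = read_from_disk(name)          # miss: one (slow) disk read
        self.cache[name] = entry
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return entry

reads = []                                    # record every simulated disk read
cache = DirCache(2)
fake_disk = lambda name: reads.append(name) or f"inode:{name}"
print(cache.lookup("/home", fake_disk))  # miss -> reads from "disk"
print(cache.lookup("/home", fake_disk))  # hit  -> served from memory
print(len(reads))                        # 1: only one disk read was needed
```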

11. Discuss the drawbacks of Contiguous Allocation of Disk Space?


Contiguous allocation of disk space is a method of storing files in contiguous blocks:
each file occupies a set of disk blocks that are next to each other on the disk. Its
main drawbacks are:
● External fragmentation: as files are created and deleted, free space is broken into
small holes, and a new file may not fit into any single hole even though the total
free space is sufficient.
● Difficulty of file growth: if a file needs to grow and the blocks adjacent to it are
already allocated, the entire file may have to be moved to a larger hole.
● The maximum size of a file must be known (or estimated) when it is created, which
leads either to wasted pre-allocated space or to files that cannot grow.

12. Discuss the advantages and disadvantages of disk space Linked Allocation method?
The disk space linked allocation method is a way of storing files on a disk that does
not require contiguous blocks. Instead, each file is represented by a linked list of
disk blocks, and each block contains a pointer to the next block in the file. Some of
the advantages and disadvantages of this method are:
Advantages:
● There is no external fragmentation: any free block can satisfy a request, and a
file can grow as long as free blocks remain.
● The size of a file need not be declared when it is created.
Disadvantages:
● Only sequential access is efficient; finding the i-th block of a file requires
following i pointers from the beginning.
● The pointers consume space in every block, and a single damaged pointer can make
the rest of the file unreachable.
13. Interpret the need of maintaining a free-space list and explain how it is implemented
by using bit vector.
Or
14. Interpret the need of maintaining a free-space list and explain how it is implemented
by using Grouping.

This technique is used to implement free-space management. When the free space is
implemented as a bitmap (bit vector), each block of the disk is represented by one bit:
when the block is free its bit is set to 1, and when the block is allocated its bit is set to 0.
The main advantage of the bitmap is that it is relatively simple and efficient for finding the
first free block, or a run of consecutive free blocks, on the disk. Many computers provide
bit-manipulation instructions that make this search fast.

The block number of the first free block is calculated by the formula:

(number of bits per word) × (number of 0-value words) + offset of the first 1-bit

For Example: Apple Macintosh operating system uses the bitmap method to allocate the disk
space.



Advantages

The following are the advantages of bitmap:

● This technique is relatively simple.


● This technique is very efficient to find the free space on the disk.

Disadvantages

The following are the disadvantages of bitmap:

● This technique benefits from special hardware support (bit-scan instructions) to
efficiently find the first 1-bit in a word that is not all zeros.
● For larger disks the bitmap itself becomes large, so the technique is inefficient unless
the entire bit vector can be kept in main memory.

Example

Consider a disk where blocks 2, 3, 4, 5, 8, 9, 10, 11, 12, 13, 17, 18, 25,26, and 27 are free and the
rest of the blocks are allocated. The free-space bitmap would be:
001111001111110001100000011100000
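Applying the textbook formula to the bitmap above (an 8-bit "word" is assumed here purely for illustration):

```python
# 1 = free, 0 = allocated, as in the example bitmap above.
bitmap = "001111001111110001100000011100000"
BITS = 8   # assumed word size for this sketch

# first free block = bits_per_word * (# of all-zero words) + offset of first 1-bit
words = [bitmap[i:i + BITS] for i in range(0, len(bitmap), BITS)]
zero_words = 0
while "1" not in words[zero_words]:
    zero_words += 1                     # skip words containing no free block
offset = words[zero_words].index("1")   # first 1-bit within that word
first_free = BITS * zero_words + offset
print(first_free)                       # block 2 is the first free block
```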

15. Interpret the need of maintaining a free-space list and explain how it is implemented
by using grouping.
Grouping

Grouping is another free-space management technique, a modification of the free-list
approach. The first free block stores the addresses of n free blocks. The first n−1 of these
blocks are actually free, while the last one contains the addresses of another n free blocks,
and so on. Unlike the standard linked-list approach, the addresses of a large number of free
blocks can be found quickly, because a single disk read yields n−1 immediately usable blocks.
16. Interpret the need of maintaining a free-space list and explain how it is implemented
by using linked list.
Linked List

This is another technique for free-space management. A linked list of all the free blocks is
maintained: a head pointer, kept in a special location on the disk, points to the first free
block; that block contains a pointer to the next free block, which points to the one after it,
and so on. Searching this list is not easy, because reading each block's pointer requires a
disk I/O. Fortunately, traversing the entire free list is not a frequent action; usually only
the first free block is needed.

Advantages

The following are the advantages of linked list:

● Whenever a file is to be allocated a free block, the operating system can simply allocate
the first block in free space list and move the head pointer to the next free block in the
list.

Disadvantages

The following are the disadvantages of linked list:


● Searching the free-space list is very time consuming; each block has to be read
from the disk, which is very slow compared to main memory.
● It is not efficient for fast access.

In our earlier example, block 2 is the first free block; it points to block 3, block 3 points
to block 4, block 4 points to block 5, block 5 points to block 8, and the chain continues in
this way through the remaining free blocks.
Unit-4
2 Marks

1.Report most schemes for allocation of disk space.


2.Sketch a case where the sequential access method is better than the direct access method for
storing file information.
3.Teach the necessity of mounting a file system.
4.Use the following ACL "USER1 RWXR_XR_ _FILE1" and discover the operations that
can be performed on the file by the users.
The owner USER1 can read, write, and execute FILE1 (RWX); users in the owner's group can
read and execute it (R_X); all other users can only read it (R_ _).
5.Teach the need of partitioning a raw disk.
6.Demonstrate the need of mount point while mounting a file system.
7.Sketch a case where the direct access method is better than the sequential access method for
storing file information.
Unit-5
4 marks
1. Discuss about magnetic tapes.
2. Summarize about magnetic tapes.
Magnetic drums, magnetic tape, and magnetic disks are types of magnetic memory; they store
data using magnetic properties. Magnetic tape is explained in brief here.

Magnetic Tape memory :


Magnetic tape is a sequential-access memory consisting of a thin plastic ribbon coated with
magnetic oxide; only one side of the ribbon is used for storing data. Data read/write speed is
slower because of the sequential access. It is highly reliable and requires a magnetic tape
drive for writing and reading data.

The width of the ribbon varies from 4 mm to 1 inch, and storage capacities range from about
100 MB to 200 GB.
Advantages :
1. These are inexpensive, i.e., low cost memories.
2. It provides backup or archival storage.
3. It can be used for large files.
4. It can be used for copying from disk files.
5. It is a reusable memory.
6. It is compact and easy to store on racks.
Disadvantages :
● Sequential access is the main disadvantage: it does not allow data to be accessed
randomly or directly.
● It requires careful storage, i.e., it is vulnerable to humidity, must be kept dust free,
and needs a suitable environment.
● Stored data cannot be easily updated or modified, i.e., it is difficult to make in-place
updates to data.

3. Discuss about the structure of modern disk drives.


4. Summarize the structure of modern disk drives.

Disk drives are secondary storage devices used in computers to store data and programs.
They consist of spinning disks, called platters, that are coated with a magnetic material.
A read/write head, mounted on a moving arm, is used to read and write data on each platter
surface. Each surface is divided into concentric tracks, each track is divided into sectors,
and the set of tracks at one arm position across all platters forms a cylinder. The
performance of a disk drive is characterized by its seek time, rotational latency, and
transfer rate.

5. Discuss about rotational latency in disk scheduling.


6. Summarize rotational latency in disk scheduling.
Rotational latency in disk scheduling is the time it takes for the desired sector of the disk to
rotate into a position where it can be accessed by the read/write head. It is one of the factors that
affect the disk performance, along with seek time and transfer time. Rotational latency depends
on the rotational speed of the disk, measured in revolutions per minute (RPM). The faster the
disk rotates, the lower the rotational latency. The average rotational latency for a disk is half the
amount of time it takes for the disk to make one revolution.

To summarize, rotational latency is the delay between the arrival of a disk request and the start of
data transfer. It is determined by the angular position of the disk and the rotational speed of the
disk. Rotational latency can be reduced by using disks with higher RPM or by scheduling disk
requests in a way that minimizes the disk arm movement.
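The relationship between rotational speed and average rotational latency can be computed directly: one revolution takes 60/RPM seconds, and the average latency is half of that.

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency = half the time of one revolution."""
    ms_per_revolution = 60_000 / rpm   # 60 s per minute, in milliseconds
    return ms_per_revolution / 2

print(round(avg_rotational_latency_ms(7200), 2))   # 7200 RPM  -> about 4.17 ms
print(round(avg_rotational_latency_ms(15000), 2))  # 15000 RPM -> 2.0 ms
```

This is why high-performance drives spin faster: doubling the RPM halves the average rotational delay.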
7. Discuss about the steps in disk initialization.
8. Summarize the steps in disk initialization.
Disk initialization is the process of preparing a disk for use by the operating system; the
steps below are for Windows. It involves choosing a partition style and then creating a
volume with a drive letter and a file system. Here are the general steps for disk
initialization:

● Open Disk Management with administrator permissions. You can do this by


typing “disk management” in the search box on the taskbar, right-clicking Disk
Management, and selecting Run as administrator.
● Right-click the disk you want to initialize and select Initialize Disk. If the disk is
listed as Offline, first right-click it and select Online.
● In the Initialize Disk dialog box, choose the partition style (GPT or MBR) and click
OK. The disk status will change to Initializing and then Online.
● Right-click the unallocated space on the disk and select New Simple Volume.
Follow the wizard to specify the size, drive letter, file system, and volume label for
the new partition.
● Click Finish to complete the disk initialization process.

9. Compare and contrast paging and swapping.


10. Distinguish between paging and swapping.

Swapping moves an entire process between main memory and a backing store, while paging
transfers fixed-size pages of a process individually. With swapping, the whole process image
must fit in memory when it is brought back in; with paging, a process can run with only some
of its pages resident, and pages can be placed in any free frames, eliminating external
fragmentation. Paging is therefore the basis of virtual memory, whereas pure swapping is
simpler but coarser-grained.

11. Discuss about the goals of parallelism in a disk system.


12. Summarize the goals of parallelism in a disk system.
Parallelism in a disk system is a technique that allows multiple operations to be
performed on different parts of the data stored on multiple disks simultaneously. The
goals of parallelism in a disk system are:
To reduce the retrieval time of data from the disk by partitioning the data across disks
and processing each partition in parallel.
To speed up the execution of queries by decomposing them into lower-level operations
such as scan, join, sort, and aggregation, and executing them concurrently on different
processors and disks.
To increase the throughput and scalability of the system by handling more concurrent
requests efficiently using shared-disk architecture or distributed memory systems.

13. Discuss about removable media.


14. Summarize about removable media,

Removable media is any type of storage device that can be removed from a computer while the
system is running. Removable media makes it easy for a user to move data from one computer
to another.

Types of removable media

● CDs
● DVDs
● Blu-ray discs
● USB drives
● SD cards
● floppy disks
● magnetic tape

Advantages and disadvantages of removable media

In a storage context, the main advantage of removable media is that it can deliver fast data
backup and recovery times. Removable storage media can also help organizations meet corporate
backup and recovery requirements because it is portable. Portability is also one of the
technology's main drawbacks: ransomware and other malware can be transferred from computer to
computer by removable media such as a USB drive.

15. Discuss the performance issues to be considered in tertiary-storage.


16. Summarize the performance issues to be considered in tertiary-storage
● Access latency: Tertiary storage devices have much longer access times than primary and
secondary storage devices, which can affect the performance of applications that require
frequent or real-time data access. For example, tape libraries and optical jukeboxes have
access latencies ranging from seconds to minutes, depending on the type and speed of the
device and whether a medium must be mechanically loaded.
● Transfer rate: Tertiary storage devices generally have lower transfer rates than primary
and secondary storage devices, which limits the performance of applications that move
large amounts of data; streaming devices such as tape sustain good throughput only once a
transfer is under way.
● Storage capacity: Tertiary storage devices have larger storage capacities than primary
and secondary storage devices, which can enable the long-term retention and backup of
data that is not frequently accessed. For example, tape libraries and optical jukeboxes can
store terabytes or petabytes of data, while cloud storage services can store petabytes or
exabytes of data.
● Cost: Tertiary storage devices are generally cheaper per gigabyte than primary and
secondary storage devices, which can reduce the cost of data storage and maintenance. For
example, tape libraries and optical jukeboxes are much cheaper per gigabyte than hard disk
drives or solid-state drives, and cloud archival tiers are usually low-cost for users.
● Accessibility: Tertiary storage devices are less accessible than primary and secondary
storage devices, which can pose challenges for data recovery and management. For
example, tape libraries and optical jukeboxes may require special equipment or software
to read or write data, while cloud storage services may require an internet connection or
authentication to access data.
Unit-5
2 Marks

1.Demonstrate the goal of swap space.


The purpose of swap space is to free up physical memory (RAM) so that it can be used for
more important tasks, while still allowing the system to maintain the illusion of having more
memory than it actually has.
2.Relate the drawback of network-attached storage systems.
Disadvantages of NAS:
● Performance depends on the protocol
● Slows down for video applications or multiple large files
● It is file oriented
● Increased LAN traffic
● The file transfer speed is not as fast as DAS
● Limited scalability
● Additional input/output processing
3.Demonstrate the selection of a disk scheduling algorithm.
8.Relate the selection of a disk scheduling algorithm.
To demonstrate the selection of a disk scheduling algorithm, we need to consider the
following factors:

● The number and order of disk access requests


● The current position of the disk head
● The direction of the disk head movement
● The objective of the disk scheduling algorithm

Different disk scheduling algorithms have different advantages and disadvantages,


depending on the workload and the system requirements.
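The effect of the algorithm choice can be seen by comparing total head movement for FCFS and SSTF on a sample request queue (the queue and starting cylinder 53 are illustrative):

```python
def fcfs(head, queue):
    """First-come first-served: serve requests in arrival order."""
    total = 0
    for r in queue:
        total += abs(head - r)
        head = r
    return total

def sstf(head, queue):
    """Shortest-seek-time-first: always serve the closest pending request."""
    pending, total = list(queue), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(head - r))
        total += abs(head - nearest)
        head = nearest
        pending.remove(nearest)
    return total

queue = [98, 183, 37, 122, 14, 124, 65, 67]
print(fcfs(53, queue))  # 640 cylinders of head movement
print(sstf(53, queue))  # 236 cylinders — far less movement for the same load
```

SSTF wins here, but it can starve distant requests under heavy load, which is exactly the kind of trade-off the factors above are meant to weigh.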

4.Demonstrate the use of ECC in disk formatting.


ECC stands for error-correcting code, a method of detecting and correcting errors that may
occur in data storage or transmission. ECC is used in disk formatting to ensure the integrity
and reliability of the data stored on the disk: during low-level formatting the controller
stores an ECC with each sector, and on every read the code is recomputed and compared with
the stored value so that soft errors can be detected and often corrected.
5.Relate an example for optical disk technology.
Or
6.show an example for optical disk technology.
The digital versatile disk (DVD) is an example of an optical disk.
● Compact disks (CD)
● Digital versatile/video disks (DVD)
● Blu-ray disks
These are currently the most commonly used forms of optical disks. They are generally used to
distribute software to customers and to store large amounts of data such as music, images,
and videos.
7.Demonstrate various disk damaging scenarios.
● Physical shock or impact: Dropping, bumping, or shaking a hard disk can lead to
a head crash, damaging the magnetic surface and causing data loss. Handle
disks with care, avoiding movement during operation.
● Heat or fire: Excessive heat can cause malfunctions, damaging electronic
components, the motor, or the platter. Fire can melt or burn the disk irreparably.
Keep disks in a cool place, away from direct sunlight or flames.
● Water or moisture: Exposure to water can cause short circuits or corrosion,
affecting electrical signals and magnetic properties. Keep disks in a dry place,
avoiding spills or submersion in water.
● Virus or malware attack: Infections can corrupt data or firmware, leading to data
loss or manipulation. Use reliable antivirus software and regularly scan disks for
threats.
● Human error or mishap: Accidental deletion, formatting, or using incorrect
commands can damage disks. Backup data frequently, exercise caution, and use
proper tools when handling disks.
