CS8493 Operating Systems Question Bank
10. What are various objectives and functions of the Operating System? (OR) What is the
main purpose of an OS?
a) It acts as an intermediary between a user of a computer and the computer hardware.
b) It controls and coordinates the use of the hardware among the various application
programs for the various users.
c) It provides the means for the proper use of hardware, software and the data in the
operation of a computer system.
Functions of OS:
Program execution
I/O operations
File-system manipulation
Communications
Resource allocation
Protection
11. What is the function of system program? Write the name of the categories in which
the system programs can be divided.
System programs provide a convenient environment for program development and execution.
They provide basic functionality to users so that users do not need to write their own
environment for program development (editors, compilers) and program execution (shells).
Functions: loading, linking, compiling, etc.
Categories:-
File management
Status information
File modification
Programming-language support
Program loading and execution
Communications
12. Define schedulers.
A process migrates between the various scheduling queues throughout its lifetime. The OS must
select processes from these queues in some fashion. This selection process is carried out by a
scheduler.
13. What are the uses of job queues, ready queues and device queues?
As processes enter the system, they are put into a job queue, which consists of all processes in
the system. The processes that are residing in main memory and are ready and waiting to
execute are kept on a list called the ready queue. The list of processes waiting for a particular
I/O device is kept in that device's queue.
15. How can a user program disturb the normal operation of the system?
By issuing illegal I/O operations.
By accessing memory locations within the OS itself.
By refusing to relinquish the CPU.
17. What does the CPU do when there are no user programs to run?
The CPU will always be doing some processing. Even when no application programs are running, the
operating system is still running, and the CPU still has work to process.
PART B
1) Explain Operating System Structure and components.
2) (i) Discuss multiprocessor systems in detail.
(ii) Explain the purpose and importance of system calls in detail with examples.
3) Explain in detail the types of system calls provided by a typical operating system.
4) Explain the purpose of system calls and discuss the calls related to device management
and communications in brief.
5) Write notes on handheld system and clustered system.
6) Explain the concepts of virtual machines, their implementation and benefits in details.
7) Write short notes on operating system services and components.
8) Write in detail about the real time system and multiprocessor system.
6) What are the benefits of co-operating processes?
a. Information sharing.
b. Computation speedup.
c. Modularity.
d. Convenience.
7) What is the use of inter process communication?
Inter-process communication provides a mechanism that allows cooperating processes to
communicate with each other and synchronize their actions without sharing the same address
space. It is provided by a message-passing system.
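As an illustration (not part of the original answer), a minimal sketch of message passing between a parent and child process using a POSIX pipe; the message text is an arbitrary example:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                          /* fd[0] = read end, fd[1] = write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                     /* child: receives the message */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n < 0) n = 0;
        buf[n] = '\0';
        printf("child received: %s\n", buf);
        close(fd[0]);
    } else if (pid > 0) {               /* parent: sends the message */
        close(fd[0]);
        const char *msg = "hello from parent";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                     /* wait for the child to exit */
    } else {
        perror("fork");
    }
    return 0;
}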
8) Define thread.
A thread, otherwise called a lightweight process (LWP), is a basic unit of CPU utilization; it
comprises a thread ID, a program counter, a register set and a stack. It shares with other
threads belonging to the same process its code section, data section, and operating-system
resources such as open files and signals.
9) Differentiate a Thread from a Process.
Threads
Will by default share memory
Will share file descriptors
Will share file system context
Will share signal handling
Processes
Will by default not share memory
Most file descriptors not shared
Don't share file system context
Don't share signal handling
11) What are the difference b/w user level threads and kernel level threads?
User threads
User threads are supported above the kernel and are implemented by a thread library at the user
level. Thread creation and scheduling are done in user space, without kernel intervention.
Therefore they are fast to create and manage; however, a blocking system call will cause the
entire process to block.
Kernel threads
Kernel threads are supported directly by the operating system. Thread creation, scheduling and
management are done by the operating system. Therefore they are slower to create and manage
compared to user threads. If a thread performs a blocking system call, the kernel can schedule
another thread in the application for execution.
12) What is the use of fork and exec system calls?
Fork is a system call by which a new process is created. Exec is also a system call, which is used
after a fork by one of the two processes to replace the process's memory space with a new program.
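A minimal sketch (not from the original answer) of the typical fork-then-exec pattern; the program run by the child ("ls") is just an arbitrary example:

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* create a new (child) process */

    if (pid == 0) {
        /* child: replace its memory space with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec fails */
        return 1;
    } else if (pid > 0) {
        wait(NULL);                     /* parent waits for the child to finish */
        printf("child finished\n");
    } else {
        perror("fork");
    }
    return 0;
}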
13) Define thread cancellation & target thread.
The thread cancellation is the task of terminating a thread before it has completed. A thread that
is to be cancelled is often referred to as the target thread. For example, if multiple threads are
concurrently searching through a database and one thread returns the result, the remaining
threads might be cancelled.
14) What are the different ways in which a thread can be cancelled?
Cancellation of a target thread may occur in two different scenarios:
Asynchronous cancellation: One thread immediately terminates the target thread.
Deferred cancellation: The target thread can periodically check if it should terminate, allowing
the target thread an opportunity to terminate itself in an orderly fashion.
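A small sketch of deferred cancellation (assuming Pthreads is the thread library in use): the target thread polls for a pending cancellation request at an explicit cancellation point.

#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

static void *worker(void *arg)
{
    int oldtype;
    (void)arg;
    /* deferred is the default type; shown explicitly for clarity */
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &oldtype);
    for (;;) {
        /* ... do a unit of work ... */
        pthread_testcancel();           /* cancellation point: terminate here if requested */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    sleep(1);
    pthread_cancel(tid);                /* request cancellation of the target thread */
    pthread_join(tid, NULL);            /* wait until it actually terminates */
    printf("target thread cancelled\n");
    return 0;
}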
15) Define PThreads
PThreads refers to the POSIX standard defining an API for thread creation and synchronization.
This is a specification for thread behavior, not an implementation.
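A minimal Pthreads sketch (illustrative, not part of the original answer) creating a thread and waiting for it to finish:

#include <stdio.h>
#include <pthread.h>

static void *say_hello(void *arg)       /* thread start routine */
{
    printf("hello from thread %s\n", (char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* create a new thread running say_hello */
    if (pthread_create(&tid, NULL, say_hello, "T1") != 0) {
        fprintf(stderr, "pthread_create failed\n");
        return 1;
    }
    pthread_join(tid, NULL);            /* wait for the thread to finish */
    return 0;
}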
16) What is critical section problem?
Consider a system consisting of n processes. Each process has a segment of code called a critical
section, in which the process may be changing common variables, updating a table, or writing a
file. When one process is executing in its critical section, no other process can be allowed to
execute in its critical section.
17) What are the requirements that a solution to the critical section problem must
satisfy?
The three requirements are
Mutual exclusion
Progress
Bounded waiting
18) Define mutual exclusion.
Mutual exclusion refers to the requirement of ensuring that no two processes or threads are in their
critical sections at the same time, i.e., if process Pi is executing in its critical section, then no other
process can be executing in its critical section.
19) Define entry section and exit section.
The critical section problem is to design a protocol that the processes can use to cooperate.
Each process must request permission to enter its critical section.
Entry Section: The section of the code implementing this request is the entry section.
Exit Section: The section of the code following the critical section is an exit section.
The general structure:
do {
entry section
critical section
exit section
remainder section
} while(1);
20) Give two hardware instructions and their definitions which can be used for
implementing mutual exclusion.
TestAndSet
boolean TestAndSet (boolean &target)
{
    boolean rv = target;   // save the original value of the lock word
    target = true;         // set the lock
    return rv;             // return the original value (executed atomically in hardware)
}
Swap
void Swap (boolean &a, boolean &b)
{
    boolean temp = a;      // exchange the contents of the two words
    a = b;                 // (executed atomically in hardware)
    b = temp;
}
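A sketch of how a test-and-set style instruction is typically used to build a spinlock; this version uses C11 atomic_flag instead of the pseudocode above, purely as an assumption about the available primitives:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */

void acquire(void)
{
    /* atomically set the flag and get its old value; spin while it was already set */
    while (atomic_flag_test_and_set(&lock))
        ;                                     /* busy wait (spin) */
}

void release(void)
{
    atomic_flag_clear(&lock);                 /* reset the flag: unlock */
}

/* usage: acquire(); ...critical section...; release(); */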
21) What is semaphore? Mention its importance in operating system.
A semaphore S is a synchronization tool: an integer value that, apart from initialization, is
accessed only through two standard atomic operations, wait and signal.
Semaphores can be used to deal with the n-process critical-section problem and to solve various
other synchronization problems.
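As an illustration (assuming POSIX unnamed semaphores are available), wait and signal correspond to sem_wait() and sem_post():

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t mutex;                     /* binary semaphore guarding the critical section */
static int shared = 0;

static void *worker(void *arg)
{
    (void)arg;
    sem_wait(&mutex);                   /* wait (P): decrement, block if value is 0 */
    shared++;                           /* critical section */
    sem_post(&mutex);                   /* signal (V): increment, wake a waiter */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);             /* initial value 1 gives mutual exclusion */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);    /* always 2 */
    sem_destroy(&mutex);
    return 0;
}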
22) Define busy waiting and spinlock.
When a process is in its critical section, any other process that tries to enter its critical section
must loop continuously in the entry code. This is called busy waiting, and this type of semaphore
is also called a spinlock, because the process "spins" while waiting for the lock.
23) Show how mutual exclusion may be violated if the signal and wait operations are not
executed atomically.
A wait operation atomically decrements the value associated with a semaphore. Suppose two wait
operations are executed on a semaphore when its value is 1: if the two operations are not
performed atomically, then it is possible that both operations proceed to decrement the
semaphore value, thereby violating mutual exclusion.
24) Define CPU scheduling.
CPU scheduling is the process of switching the CPU among various processes. CPU scheduling
is the basis of multiprogrammed operating systems. By switching the CPU among processes, the
operating system can make the computer more productive.
25) What is preemptive and nonpreemptive scheduling?
Under nonpreemptive scheduling once the CPU has been allocated to a process, the process
keeps the CPU until it releases the CPU either by terminating or switching to the waiting state.
Under preemptive scheduling, a process that is currently using the CPU can be preempted in the
middle of its execution and the CPU given to another process.
26) What is a Dispatcher?
The dispatcher is the module that gives control of the CPU to the process selected by the short-
term scheduler. This function involves:
Switching context.
Switching to user mode.
Jumping to the proper location in the user program to restart that program.
27) What is dispatch latency?
The time taken by the dispatcher to stop one process and start another running is known as
dispatch latency.
28) What are the various scheduling criteria for CPU scheduling?
The various scheduling criteria are
CPU utilization
Waiting time
Throughput
Response time
Turnaround time
29) Define throughput?
Throughput in CPU scheduling is the number of processes that are completed per unit time. For
long processes, this rate may be one process per hour; for short transactions, throughput might be
10 processes per second.
30) What is turnaround time?
Turnaround time is the interval from the time of submission to the time of completion of a
process. It is the sum of the periods spent waiting to get into memory, waiting in the ready
queue, executing on the CPU, and doing I/O.
31) Define race condition.
When several processes access and manipulate the same data concurrently, and the outcome of the
execution depends on the particular order in which the accesses take place, this is called a race
condition. To avoid race conditions, only one process at a time should manipulate the shared variable.
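A small demonstration (illustrative; the counter and iteration count are arbitrary): two threads increment a shared counter without synchronization, so the final value is often less than expected.

#include <stdio.h>
#include <pthread.h>

static long counter = 0;                /* shared data, no lock */

static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                      /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* expected 2000000, but interleaved updates are frequently lost */
    printf("counter = %ld\n", counter);
    return 0;
}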
32) Write the four situations under which CPU scheduling decisions take place
CPU scheduling decisions take place under one of four conditions:
When a process switches from the running state to the waiting state, such as for an I/O
request or invocation of the wait( ) system call.
When a process switches from the running state to the ready state, for example in
response to an interrupt.
When a process switches from the waiting state to the ready state, say at completion of
I/O or a return from wait( ).
When a process terminates.
33) Define deadlock.
A process requests resources; if the resources are not available at that time, the process enters a
wait state. Waiting processes may never again change state, because the resources they have
requested are held by other waiting processes. This situation is called a deadlock.
In an RTOS, all processes are kernel processes and hence time constraints should be strictly followed.
All processes/tasks (the terms can be used interchangeably) are based on priority, and time
constraints are important for the system to run correctly.
43) What do you mean by a short-term scheduler?
The selection process is carried out by the short-term scheduler, or CPU scheduler. The
scheduler selects a process from the processes in memory that are ready to execute and allocates
the CPU to that process.
44) What is banker’s algorithm?
Banker’s algorithm is a deadlock avoidance algorithm that is applicable to a resource-allocation
system with multiple instances of each resource type. The two algorithms used for its
implementation are:
a. Safety algorithm: the algorithm for finding out whether or not a system is in a safe state.
b. Resource-request algorithm: the algorithm for deciding whether a request can be granted safely.
If the resulting resource-allocation state is safe, the transaction is completed and process Pi is
allocated its resources. If the new state is unsafe, Pi must wait and the old resource-allocation
state is restored.
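A minimal sketch of the safety algorithm in C (the matrices below are a small textbook-style example configuration, used here only for illustration; Need = Max - Allocation):

#include <stdio.h>
#include <stdbool.h>

#define N 5   /* number of processes (example) */
#define M 3   /* number of resource types (example) */

int Available[M]     = {3, 3, 2};
int Allocation[N][M] = {{0,1,0}, {2,0,0}, {3,0,2}, {2,1,1}, {0,0,2}};
int Need[N][M]       = {{7,4,3}, {1,2,2}, {6,0,0}, {0,1,1}, {4,3,1}};

bool is_safe(void)
{
    int Work[M];
    bool Finish[N] = {false};

    for (int j = 0; j < M; j++)
        Work[j] = Available[j];         /* Work = Available */

    for (;;) {
        bool progressed = false;
        for (int i = 0; i < N; i++) {
            if (Finish[i]) continue;
            bool can_run = true;
            for (int j = 0; j < M; j++)
                if (Need[i][j] > Work[j]) { can_run = false; break; }
            if (can_run) {
                /* pretend Pi runs to completion and releases its resources */
                for (int j = 0; j < M; j++)
                    Work[j] += Allocation[i][j];
                Finish[i] = true;
                progressed = true;
            }
        }
        if (!progressed) break;         /* no runnable process found this pass */
    }
    for (int i = 0; i < N; i++)
        if (!Finish[i]) return false;   /* some process can never finish: unsafe */
    return true;
}

int main(void)
{
    printf("system is %s\n", is_safe() ? "in a safe state" : "NOT in a safe state");
    return 0;
}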
PART-B
2) Discuss how scheduling algorithms are selected for a system. What are the criteria
considered? Explain the different evaluation Methods.
3) Discuss the different techniques used for evaluating CPU scheduling algorithms in detail.
7) Define swapping.
A process needs to be in memory to be executed. However a process can be swapped temporarily
out of memory to a backing store and then brought back into memory for continued execution.
This process is called swapping.
8) What are the common strategies to select a free hole from a set of available holes?
The most common strategies are
A. First fit B. Best fit C. Worst fit
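A sketch (with an arbitrary example hole list) of how first fit scans the list of free holes and takes the first one large enough; best fit and worst fit differ only in choosing the smallest and the largest adequate hole, respectively.

#include <stdio.h>

/* returns the index of the first hole that can satisfy the request, or -1 */
int first_fit(const int holes[], int nholes, int request)
{
    for (int i = 0; i < nholes; i++)
        if (holes[i] >= request)
            return i;
    return -1;
}

int main(void)
{
    int holes[] = {100, 500, 200, 300, 600};    /* free hole sizes in KB (example) */
    int idx = first_fit(holes, 5, 212);
    if (idx >= 0)
        printf("212KB placed in hole %d (%dKB)\n", idx, holes[idx]);
    return 0;
}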
18) What are the various page replacement algorithms used for page replacement?
FIFO page replacement
Optimal page replacement
LRU page replacement
LRU approximation page replacement
Counting based page replacement
Page buffering algorithm.
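A sketch of FIFO page replacement counting page faults for a small reference string (the string and frame count below are arbitrary examples):

#include <stdio.h>

int main(void)
{
    int refs[]    = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};  /* example reference string */
    int nrefs     = sizeof(refs) / sizeof(refs[0]);
    int frames[3] = {-1, -1, -1};                          /* 3 empty frames */
    int nframes   = 3, next = 0, faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (!hit) {
            frames[next] = refs[i];        /* evict the oldest resident page (FIFO order) */
            next = (next + 1) % nframes;
            faults++;
        }
    }
    printf("page faults = %d\n", faults);  /* 9 for this string with 3 frames */
    return 0;
}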
19) Differentiate between Global and Local page replacement algorithms.
Global page replacement:
Allows a process to select a replacement frame from the set of all frames, even if that frame is
currently allocated to some other process.
The number of frames allocated to a process can change, since a process may happen to select
frames allocated to other processes, thus increasing the number of frames allocated to it.
A process cannot control its own page-fault rate.
Local page replacement:
Each process selects only from its own set of allocated frames.
The number of frames allocated to a process does not change.
A process can control its own page-fault rate.
28) Consider a logical address space of eight pages of 1024 words each, mapped onto a
physical memory of 32 frames. How many bits are there in the logical address and in the
physical address?
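A worked solution (the standard calculation, assuming word-addressable memory): each page holds 1024 = 2^10 words, so the offset needs 10 bits. Eight pages need 3 bits for the page number, so the logical address has 3 + 10 = 13 bits. Thirty-two frames need 5 bits for the frame number, so the physical address has 5 + 10 = 15 bits.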
PART-B
1. (i) Describe the hierarchical paging technique for structuring page tables. (8)
(ii) Explain the concept of paging in detail with necessary diagrams. (8)
2. Write in detail about Segmentation
3. Write in detail about Segmentation with Paging.
4. Consider a logical-address space of eight pages of 1024 words each mapped onto a
physical memory of 32 frames.
a. How many bits are in the logical address?
b. How many bits are in the physical address?
5. Explain the use of a page table and give a brief account of how it is used.
6. Explain the segmentation with paging implemented in OS/2 32-bit IBM system. Describe
the following algorithms:
a. First fit
b. Best Fit
c. Worst Fit
7. Explain in detail the segmentation scheme.
8. Write short notes on paging system.
9. Explain how paging supports virtual memory. With a neat diagram explain how logical
address is translated into physical address.
10. Write in detail about the contiguous memory storage
11. Explain the various address translation techniques in paging.
12. Explain the various address translation techniques in segmentation.
13. Explain the principles of segmentation and paging implemented in memory with a diagram.
14. Explain the various page table structures in detail.
15. Write short notes on LRU, FIFO and clock replacement strategies?
16. Explain any four page replacement algorithms in detail?
17. Consider the following segment table:
Segment   Base   Length
0         219    600
1         2300   14
2         90     100
3         1327   580
4         1952   96
18. What are the physical addresses for the following logical addresses?
i. 0,430
ii. 1,10
iii. 2,500
iv. 3,400
v. 4,112
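A worked sketch of the expected answers (reading each address as a segment, offset pair): i. 0,430 gives 219 + 430 = 649; ii. 1,10 gives 2300 + 10 = 2310; iii. 2,500 is an illegal reference (offset 500 exceeds length 100), so it traps to the operating system; iv. 3,400 gives 1327 + 400 = 1727; v. 4,112 is an illegal reference (offset 112 exceeds length 96), so it traps to the operating system.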
19. Write short notes on non-contiguous memory management
20. What is thrashing? Explain the working set model in detail. (MAY/JUNE 2009)
21. Given memory partitions of 100KB, 500KB, 200KB, 300KB and 600KB (in order), how
would each of the first-fit, best-fit and worst-fit algorithms place processes of 212KB,
417KB, 112KB and 426KB (in order)? Which algorithm makes the most efficient use of
memory?
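A commonly worked answer (assuming the unused remainder of a partition can be reused as a new hole): First fit places 212KB in the 500KB partition, 417KB in the 600KB partition, 112KB in the 288KB left over from the 500KB partition; 426KB must wait. Best fit places 212KB in 300KB, 417KB in 500KB, 112KB in 200KB, and 426KB in 600KB. Worst fit places 212KB in 600KB, 417KB in 500KB, 112KB in the 388KB left over from the 600KB partition; 426KB must wait. In this case best fit makes the most efficient use of memory, since it is the only algorithm that places all four processes.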
22. (i) Explain briefly and compare fixed and dynamic memory partitioning schemes.
(ii) Explain FIFO, optimal and LRU page replacement algorithms with an example
reference strings. Mention the merits and demerits of each of the above algorithms.
23. Consider the following page reference string 1,2,3,4,2,1,5,6,2,1,3,7,6,3,2,1,3,6.
24. How many page faults would occur for the following replacement algorithms, assuming
one, two, three and four frames?
i) LRU replacement
ii) FIFO replacement
iii) Optimal replacement
25. (i) Consider the following page reference string: (4) 2, 1, 0, 3, 4, 0, 0, 0, 2, 4, 2, 1, 0, 3, 2.
How many page faults would occur if the working set policy were used with a
window size of 4?
Show when each page fault would occur clearly.
(ii) What is meant by thrashing? Discuss in detail. (12)
26. Explain the concept of demand paging in detail with neat diagram
27. Why are translation look-aside buffers important? Explain the details stored in a TLB
table entry.
6) What is Directory?
The device directory, or simply the directory, records information such as name, location,
size, and type for all files on that particular partition. The directory can be viewed as a symbol
table that translates file names into their directory entries.
8) What are the most common schemes for defining the logical structure of a
directory?
The most common schemes for defining the logical structure of directory
Single-Level Directory
Two-level Directory
Tree-Structured Directories
Acyclic-Graph Directories
General Graph Directory
12) What are the structures used in file-system implementation? Define File Control
Block.
a. Several on-disk and in-memory structures are used to implement a file system.
b. On-disk structures include:
Boot control block
Partition control block
Directory structure used to organize the files
File control block (FCB)
c. In-memory structures include:
In-memory partition table
In-memory directory structure
System-wide open-file table
Per-process open-file table
20) How can the index blocks be implemented in the indexed allocation scheme?
The index block can be implemented as follows
1. Linked scheme
2. Multilevel scheme
3. Combined scheme
26) State any three disadvantages of placing functionality in a device controller, rather
than in the kernel.
Three advantages:
a. Bugs are less likely to cause an operating system crash.
b. Performance can be improved by utilizing dedicated hardware and hard-coded algorithms.
c. The kernel is simplified by moving algorithms out of it.
Three disadvantages:
a. Bugs are harder to fix - a new firmware version or new hardware is needed.
b. Improving algorithms likewise requires a hardware update rather than just a kernel or
device-driver update.
c. Embedded algorithms could conflict with the application's use of the device, causing decreased
performance.
29) What are the information contained in a boot control block and partition control
block?
Boot control block:
Contains information needed by the system to boot an operating system from that partition. If the
disk does not contain an operating system, this block can be empty. It is typically the first block
of a partition. In UFS, this is called the boot block.
Partition control block:
Contains partition details, such as the number of blocks in the partition, the size of the blocks,
the free-block count and free-block pointers, and the free-FCB count and FCB pointers.
30) Define buffering.
A buffer is a memory area that stores data while they are transferred between two devices or
between a device and an application. Buffering is done for three reasons
a. To cope with a speed mismatch between the producer and consumer of a data stream
b. To adapt between devices that have different data transfer sizes
c. To support copy semantics for application I/O
44) Write three basic functions which are provided by the hardware clocks and timers.
• Give the current time.
• Give the elapsed time.
• Set a timer to trigger operation X at time T.
PART-B
1) Explain the different disk scheduling algorithms with examples.
2) Explain and compare FCFS, SSTF, C-SCAN and C-LOOK disk scheduling algorithms
with examples.
3) How do you choose an optimal technique among the various disk scheduling techniques?
Explain.
4) Write short notes on disk management.
5) Explain in detail the disk structure and implementation
6) Discuss how free space is managed by the operating system.
7) Explain the process scheduling in Linux.
8) Write in detail the memory management in Linux.
9) Discuss in-detail the disk performance with suitable expressions.
10) Write short notes on swap space management.
11) Write short notes on file system in Linux.
12) Explain about the various components and salient features in Linux.
13) Write an elaborate note on RAID and RAID Levels.
14) Explain the file allocation methods.
15) i) Explain the issues in designing a file system. (8)
ii) Explain the various file directory structures. (8)
16. What is a character device driver?
A character device driver is one which does not offer random access to fixed blocks of data. A
character device driver must register a set of functions which implement the driver's various file
I/O operations.
17. What is Mobile OS?
A mobile operating system (mobile OS) is an OS built exclusively for a mobile device,
such as a smartphone, personal digital assistant (PDA), tablet or other embedded mobile
device.
18. What is iOS?
iOS is a mobile operating system created and developed by Apple Inc. exclusively for its
hardware. It is the operating system that presently powers many of the company's mobile
devices, including the iPhone, iPad, and iPod Touch. It is the second most popular mobile
operating system globally after Android.
19. List the services available in iOS.
i) Cocoa Touch
ii) Media layer
iii) Core Services layer
iv) Core OS layer
20. List the features of iOS.
i) System fonts
ii) Folders
iii) Notification center
iv) Accessibility
v) Multitasking
vi) Switching Applications
vii) Task completion
viii) Background audio
ix) Voice over IP
x) Background Location
xi) Push notification
21. List the advantages of iOS
Best gaming experience.