Operating System Short
Q1. What is Multiprogramming?
Ans: Sharing the processor, when two or more programs reside in memory at
the same time, is referred to as multiprogramming. Multiprogramming assumes
a single shared processor. The main objectives of multiprogramming are to
increase CPU utilization and system throughput.
Q2. Explain long term and short term Scheduler? OR Differentiate between
Short Term and Long Term Scheduler?
Ans:
Q3. What is Page fault? Under what circumstances do page faults occur?
Ans: A page fault occurs when a running program accesses a page that is
mapped into its virtual address space but is not currently loaded in main
memory. Page faults occur when the referenced page has not yet been brought
into memory (demand paging) or has been swapped out to the backing store.
Ans: A thread is a lightweight process and forms the basic unit of CPU
utilization. A process can perform more than one task at the same time by
including multiple threads.
A thread has its own program counter, register set, and stack
A thread shares resources with other threads of the same process: the
code section, the data section, files and signals.
Q6. Write the differences between User-level and Kernel-level threads?
Ans:
Ans:
Ans: A semaphore can be accessed only through two atomic operations: the
wait() operation and the signal() operation. The wait() operation is also called
the P, sleep, or down operation, and the signal() operation is also called the V,
wake-up, or up operation.
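These semantics can be sketched in Python. The class below is purely illustrative (the standard library already provides threading.Semaphore with the same behaviour):

```python
import threading

class CountingSemaphore:
    """A counting semaphore exposing the classic P/V (wait/signal) operations."""
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def wait(self):            # P / down: block until the count is positive, then decrement
        with self._cond:
            while self._value == 0:
                self._cond.wait()
            self._value -= 1

    def signal(self):          # V / up: increment the count and wake one waiting thread
        with self._cond:
            self._value += 1
            self._cond.notify()
```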
Ans: Demand Paging is a technique in which a page is usually brought into the
main memory only when it is needed or demanded by the CPU. Initially, only
those pages are loaded that are required by the process immediately.
Ans: In some cases no pages are initially loaded into memory; pages are then
loaded only when they are demanded by the process, by generating page
faults. This is referred to as Pure Demand Paging.
Q11. Define Arrival time, Burst time, Completion time, Waiting time,
Turnaround time and Response time?
Ans:
Arrival Time – Time at which the process arrives in the ready queue.
Burst Time – Time required by the process for CPU execution.
Completion Time – Time at which the process completes its execution.
Turnaround Time – Difference between completion time and arrival time.
Waiting Time – Total time spent by the process in the ready state waiting for
the CPU.
Response Time – Time spent between the process entering the ready state and
getting the CPU for the first time.
Response Time = Time at which the process gets the CPU for the first time − Arrival Time
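These definitions can be checked on a small, hypothetical example. The process names, arrival times and burst times below are made up, and a non-preemptive FCFS schedule is assumed (so each process gets the CPU exactly once):

```python
# Hypothetical processes: (name, arrival_time, burst_time), scheduled FCFS.
procs = [
    ("P1", 0, 4),
    ("P2", 1, 3),
    ("P3", 2, 1),
]

results = {}
time = 0
for name, arrival, burst in procs:
    start = max(time, arrival)          # time at which the process first gets the CPU
    completion = start + burst
    results[name] = {
        "completion": completion,
        "turnaround": completion - arrival,          # completion - arrival
        "waiting": (completion - arrival) - burst,   # turnaround - burst
        "response": start - arrival,                 # under FCFS, response == waiting
    }
    time = completion

print(results)
```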
Ans: Processor affinity, or CPU pinning, enables binding and unbinding of a
process or multiple processes to a specific CPU core, so that the process(es)
will run only on that specific core.
Q13. Differentiate between I/O bound processes and CPU bound processes?
Ans: I/O-bound processes spend more of their time doing I/O than doing
computations, and typically have many short CPU bursts. CPU-bound processes
spend more of their time doing computations, and typically have a few very
long CPU bursts.
Ans: When the CPU switches to another process, the system must save the
state of the old process and load the saved state of the new process; this task is
known as context switching.
Ans:
Benefits:
1. A short quantum allows many processes to circulate through the
processor quickly, each getting a brief chance to run.
2. It provides better response time for short interactive processes.
Drawbacks:
1. A short quantum causes too many process switches and lowers CPU
efficiency.
Ans:
Ans:
Q18. Define seek time and latency time/Rotational Latency?
Ans: Seek Time: Seek time is the time taken to move the disk arm to the
specified track where the data is to be read or written.
Rotational Latency: Rotational latency is the time taken by the desired sector
of the disk to rotate into position under the read/write head.
Ans: The file system is divided into different layers: application programs, the
logical file system, the file-organization module, the basic file system, the I/O
control layer, and the devices themselves.
Q20. What are the advantages and disadvantages of Contiguous allocation?
Ans:
Advantages:
1. It is simple to implement.
2. It gives excellent read performance.
3. It supports random access into files.
Disadvantages:
1. The disk will become fragmented.
2. It may be difficult to let a file grow.
Ans: Effective access time is the average access time to memory items, where
some items are cached in fast storage and other items are not cached.
Ans: In 1970, Belady, Nelson and Shedler discovered that in FIFO page
replacement, certain page-reference patterns actually cause more page faults
when the number of page frames allocated to a process is increased. This
phenomenon is called the FIFO anomaly or Belady’s anomaly.
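The anomaly can be reproduced with a short simulation on the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: FIFO incurs 9 faults with 3 frames but 10 faults with 4 frames:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with the given number of frames."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())  # evict the oldest loaded page
            memory.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: more frames, yet *more* faults
```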
Ans: The dispatcher is the module that gives a process control over the CPU
after it has been selected by the short-term scheduler. The responsibilities of a
dispatcher are the following:
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that
program
Ans: Dispatch latency is the time taken by the dispatcher to stop one process
and start another. The lower the dispatch latency, the more efficient the
software for the same hardware configuration.
Ans: Critical Section is the portion of the code in the program where shared
variables or resources are accessed and/or updated by various processes.
Examples: code that increments a shared counter, or code that updates a
shared file or database record.
Ans:
Ans: These three are the conditions that a solution to the critical section
problem must satisfy.
Mutual Exclusion
Mutual exclusion implies that only one process can be inside the critical
section at any time. If any other processes require the critical section,
they must wait until it is free.
Progress
Progress means that if a process is not using the critical section, then it
should not stop any other process from accessing it. In other words, any
process can enter a critical section if it is free.
Bounded Waiting
Bounded waiting means that each process must have a limited waiting
time. It should not wait endlessly to access the critical section.
Q30. What is a Job queue, ready queue and device queue? OR What are
various scheduling queues?
Job queue − This queue keeps all the processes in the system.
Ready queue − This queue keeps a set of all processes residing in main
memory, ready and waiting to execute. A new process is always put in this
queue.
Device queues − The processes which are blocked due to unavailability of an
I/O device constitute this queue.
Q31. Explain how Demand Paging affects the performance of a computer
system?
Ans: Assume a memory access time of 200 nanoseconds and an average
page-fault service time of 8 milliseconds. If one access out of 1,000 causes a
page fault, the effective access time is about 8.2 microseconds, so the
computer is slowed down by a factor of roughly 40 because of demand paging.
Therefore, it is important to keep the page-fault rate low in a demand-paging
system; otherwise the effective access time increases, slowing process
execution.
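The arithmetic behind these figures can be checked directly; the 200 ns access time and 8 ms fault-service time are the assumed parameters of the standard textbook example:

```python
# Assumed parameters: 200 ns memory access, 8 ms page-fault service time.
memory_access_ns = 200
fault_service_ns = 8_000_000        # 8 milliseconds expressed in nanoseconds
p = 1 / 1000                        # one access in 1,000 causes a page fault

# Effective access time = (1 - p) * memory access + p * page-fault service time
eat_ns = (1 - p) * memory_access_ns + p * fault_service_ns
print(eat_ns)                       # 8199.8 ns, i.e. about 8.2 microseconds
print(eat_ns / memory_access_ns)    # ~41x slowdown, roughly a factor of 40
```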
Q32. Is it a wise approach to reserve an array in zero-capacity buffer?
Ans: No, it is not a wise approach, because a zero-capacity buffer cannot store
any data, so reserving an array for it makes no sense.
Ans: Safe State − If the system can allocate resources to the process in such a
way that it can avoid deadlock. Then the system is in a safe state.
Unsafe State − If the system can’t allocate resources to the process safely, then
the system is in an unsafe state.
Deadlock State − If a process is in the waiting state and is unable to change its
state because the resources it requires are held by other waiting processes,
then the system is said to be in a deadlock state.
Ans: Multithreading is a CPU (central processing unit) feature that allows two
or more instruction threads to execute independently while sharing the same
process resources.
Q37. What are the Time Sharing Systems?
Ans: In time-sharing systems, the processor's time is shared among multiple
users, so that many users located at different terminals can use the same
computer system at the same time.
Ans:
Q40. Differences between logical and physical addresses?
Ans:
Q41. What are necessary conditions which can lead to a deadlock situation in
a system?
Ans:
Mutual exclusion − At least one resource must be held in a non-sharable
mode; only one process at a time can use it.
Hold and wait − A process is holding at least one resource and is waiting for
another resource that is held by another process.
No preemption − A resource cannot be forcibly taken away from the process
holding it; it can only be released voluntarily.
Circular wait − One process is waiting for a resource held by a second process,
which is waiting for a resource held by a third process, and so on, until the last
process is waiting for a resource held by the first process. This creates a
circular chain.
Q42. What are the states of a process? OR List down the process states?
Ans:
scheduling. Any other process which enters the queue has to wait until the
current process finishes its CPU cycle.
Ans:
Ans: Starvation is the problem that occurs when high-priority processes keep
executing while low-priority processes remain blocked for an indefinite time. A
steady stream of higher-priority processes can prevent a low-priority process
from ever obtaining the processor.
Ans: A computer can address more memory than the amount physically
installed on the system. This extra memory is actually called virtual
memory and it is a section of a hard disk that's set up to emulate the
computer's RAM.
Ans: Garbage collection (GC) is a dynamic approach to automatic memory
management and heap allocation that processes and identifies dead memory
blocks and reallocates storage for reuse. The primary purpose of garbage
collection is to reduce memory leaks.
Ans: The main purpose of paging is to allow the physical address space of a
process to be non-contiguous, so that a process can be allocated memory
wherever a free frame is available in main memory. The purpose of the page
table is to store the mapping between logical and physical addresses.
Ans: A spinlock is a lock that causes a thread trying to acquire it to simply wait
in a loop ("spin") while repeatedly checking whether the lock is available. Since
the thread remains active but is not performing a useful task, the use of such a
lock is a kind of busy waiting.
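A minimal Python sketch of the idea follows. Python has no user-level atomic test-and-set instruction, so the inner threading.Lock merely stands in for that hardware primitive; the class is illustrative, not a production lock:

```python
import threading

class SpinLock:
    """Illustrative spinlock: acquire() busy-waits ("spins") until the lock is free."""
    def __init__(self):
        self._flag = False
        self._guard = threading.Lock()   # stands in for an atomic test-and-set

    def _test_and_set(self):
        # Atomically read the old flag value and set the flag to True.
        with self._guard:
            old = self._flag
            self._flag = True
            return old

    def acquire(self):
        while self._test_and_set():      # spin while the flag was already True
            pass                         # busy waiting: the thread stays active

    def release(self):
        self._flag = False
```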
Q58. Which of the following scheduling algorithms can lead to starvation?
FIFO, Shortest Job First, Priority, Round Robin?
Ans: Shortest Job First and Priority scheduling can lead to starvation; FIFO and
Round Robin are starvation-free.
Internal fragmentation.
External fragmentation
Ans:
Ans: Hit ratio is defined as the percentage of times that a page number is
found in the associative registers.
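As a worked example with assumed numbers (a 20 ns TLB lookup, a 100 ns memory access, and an 80% hit ratio), the effective access time comes out as:

```python
# Assumed numbers: 20 ns TLB lookup, 100 ns memory access, 80% hit ratio.
tlb_ns, mem_ns, hit_ratio = 20, 100, 0.80

hit_time  = tlb_ns + mem_ns       # page number found in the TLB: 120 ns total
miss_time = tlb_ns + 2 * mem_ns   # miss: extra memory access for the page table: 220 ns

eat = hit_ratio * hit_time + (1 - hit_ratio) * miss_time
print(eat)  # 140.0 ns
```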
Ans:
Ans: Batch systems are systems in which the user does not interact with the
computer directly. An operator takes the jobs and creates groups of jobs that
perform similar functions. These job groups are treated as a batch and
executed as a group.
Ans: When a process creates a new process, the identity of the newly created
process is passed to its parent. When a parent process terminates, all of its
child processes are also terminated. This phenomenon is known as
"Cascading Termination" and is normally initiated by the operating system.
Ans: Round-robin is a CPU scheduling algorithm in which each ready task runs
in turn, in a cyclic queue, for a limited time slice. This algorithm also offers
starvation-free execution of processes.
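A short simulation of the cyclic queue; the burst times and quantum below are hypothetical, and all processes are assumed to arrive at time 0:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling (all arrivals at time 0).
    Returns the completion time of each process."""
    ready = deque(bursts.items())        # the cyclic ready queue
    time, completion = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)    # run for at most one time slice
        time += run
        if remaining > run:
            ready.append((name, remaining - run))  # back of the queue
        else:
            completion[name] = time
    return completion

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```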
Ans:
The I/O device (controller) is busy transferring data from the device buffer to
the device: it goes from idle to transferring, which is the peak for the I/O
device. It goes back to idle when the transfer is done, until the next request.
The CPU curve shows a peak when the transfer is done, because the CPU is
notified by the device (through an interrupt).
Ans: When one program depends on some other program, rather than loading
all the dependent programs, the CPU links the dependent programs to the
main executing program when they are required. This mechanism is known as
Dynamic Linking. Dynamic linking refers to linking that is done at load time or
run time, not when the executable is created.
Q71. With what type of fragmentation does Paging and Segmentation suffers
from?
Ans: Paging suffers from internal fragmentation, while segmentation suffers
from external fragmentation.
Ans: Preemption as used with respect to operating systems means the ability
of the operating system to preempt (that is, stop or pause) a currently
scheduled task in favour of a higher priority task.
Ans:
Ans: ‘fork()’ system call is used to create processes. It takes no arguments and
returns a process ID. The purpose of the fork() is to create a new process,
which becomes the child process of the caller. After a new child process is
created, both processes will execute the next instruction following the fork()
system call.
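A minimal example using Python's thin wrapper over the POSIX fork() call (POSIX systems only):

```python
import os

pid = os.fork()                 # returns 0 in the child, the child's PID in the parent
if pid == 0:
    # Child process: both processes continue from the instruction after fork().
    print("child pid:", os.getpid())
    os._exit(0)                 # terminate the child immediately
else:
    os.waitpid(pid, 0)          # parent waits for the child to finish
    print("parent pid:", os.getpid())
```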
Q76. Suppose an organization hires a person to divide tasks among other
persons. Which multiprocessing environment does this organization depict?
Ans: Asymmetric multiprocessing, in which one master processor assigns tasks
to the other (slave) processors.
Q78. FIFO and LRU both use previous information in page replacement
policy. How is one different from another then?
Ans: In FIFO, when a page needs to be replaced, the oldest page, which is at
the front of the queue, is selected for removal. Whereas in LRU, whenever
page replacement happens, the page which has not been used for the longest
amount of time is replaced.
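The difference can be seen by running both policies on the same reference string; on 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 with 3 frames they pick different victims and even produce different fault counts:

```python
def count_faults(refs, frames, lru=False):
    """Count page faults; the front of the list is the next victim.
    FIFO orders pages by load time; LRU reorders them on every use."""
    pages, faults = [], 0
    for p in refs:
        if p in pages:
            if lru:
                pages.remove(p)    # LRU: a hit refreshes the page's recency
                pages.append(p)
        else:
            faults += 1
            if len(pages) == frames:
                pages.pop(0)       # evict the page at the front of the list
            pages.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_faults(refs, 3, lru=False))  # FIFO: 9 faults
print(count_faults(refs, 3, lru=True))   # LRU: 10 faults
```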
Ans: Concurrency − Concurrency is when two or more tasks can start, run, and
complete in overlapping time periods. It doesn't necessarily mean they'll ever
both be running at the same instant. For example, multitasking on a single-
core machine.
Parallelism is when tasks literally run at the same time, e.g., on a multicore
processor.
Q80. Mention at least 4 system calls when you enter a command that copies
a file from one path to another?
Ans:
Windows Linux
CreateFile() open()
ReadFile() read()
WriteFile() write()
CloseHandle() close()
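The same sequence of calls can be mirrored with Python's thin os-level wrappers around open(), read(), write() and close(); the copy_file helper and file names below are hypothetical:

```python
import os
import tempfile

def copy_file(src, dst):
    """Copy a file using the four system calls listed above, via os wrappers."""
    fd_in = os.open(src, os.O_RDONLY)
    fd_out = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    while True:
        chunk = os.read(fd_in, 4096)   # read() returns b"" at end of file
        if not chunk:
            break
        os.write(fd_out, chunk)
    os.close(fd_in)
    os.close(fd_out)

# Demo in a temporary directory.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "a.txt")
    dst = os.path.join(d, "b.txt")
    with open(src, "w") as f:
        f.write("hello")
    copy_file(src, dst)
    print(open(dst).read())  # hello
```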
Ans: In Peterson's algorithm, a process never waits longer than one turn for
entrance to the critical section, which is why Peterson's solution does not
violate the bounded-waiting requirement.
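A sketch of Peterson's algorithm for two processes follows. It relies on CPython's global interpreter lock for the atomicity of the individual loads and stores; on real hardware, memory barriers would also be needed:

```python
import threading

# Shared state for Peterson's two-process solution (process ids 0 and 1).
flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0                # whose turn it is to wait
counter = 0             # shared variable protected by the critical section

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(10_000):
        flag[me] = True
        turn = other                          # politely give the other process priority
        while flag[other] and turn == other:  # wait at most one turn (bounded waiting)
            pass
        counter += 1                          # critical section
        flag[me] = False                      # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start(); t0.join(); t1.join()
print(counter)  # 20000: no increments were lost
```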
Q82. Why do we call a program passive entity and a process active entity?
Ans: A program is a passive entity, as it resides in secondary memory, such as a
file containing a list of instructions stored on disk, whereas a process is an
active entity, created during execution and loaded into main memory.
Ans:
Q84. Why SJF can’t be used in real-time environment when you don’t have
execution history of the programs?
Ans: SJF needs to know the length of each process's next CPU burst in
advance. In a real-time environment with no execution history, there is no way
to predict the burst lengths, so SJF cannot be applied.
Q86. Suppose that we have free segments with sizes 6, 17, 25, 14, and 19.
Place a program with size 13kb in the free segment using first-fit, best-fit and
worst fit?
Ans: First-fit places the 13 KB program in the 17 KB segment (the first one
large enough), best-fit places it in the 14 KB segment (the smallest one large
enough), and worst-fit places it in the 25 KB segment (the largest one).
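The three placements can be checked with a short sketch:

```python
def first_fit(segments, size):
    # First segment large enough, scanning in order.
    return next((s for s in segments if s >= size), None)

def best_fit(segments, size):
    # Smallest segment that is still large enough.
    fits = [s for s in segments if s >= size]
    return min(fits) if fits else None

def worst_fit(segments, size):
    # Largest available segment.
    fits = [s for s in segments if s >= size]
    return max(fits) if fits else None

segments = [6, 17, 25, 14, 19]
print(first_fit(segments, 13))  # 17
print(best_fit(segments, 13))   # 14
print(worst_fit(segments, 13))  # 25
```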
Q87. How to implement hold and wait which can ensure that a deadlock will
not occur?
Ans: Allocate all required resources to the process before the start of its
execution; this way the hold-and-wait condition is eliminated and deadlock will
not occur. However, its practical implementation is difficult, because a process
cannot always determine the resources it needs in advance.
Ans: Micro-Kernel: This structure designs the operating system by removing all
non-essential components from the kernel and implementing them as system
and user programs. This results in a smaller kernel, called the micro-kernel.
Layered OS: In this structure the OS is broken into a number of layers (levels),
each of which performs some kind of functionality. This simplifies the
debugging process and increases modularity.
fault rate exceeds the upper limit, more frames can be allocated to the
process.
Q90. Page table can be placed either in CPU registers or main memory. What
will be the criteria to place the page table in CPU registers?
Ans: A file system is a process that manages how and where data on a storage
disk, typically a hard disk drive (HDD), is stored, accessed and managed. It is a
logical disk component that manages a disk's internal operations as it relates to
a computer and is abstract to a human user.
Ans: An application program interface (API) is code that allows two software
programs to communicate with each other. An API defines the correct way for
a developer to request services from an operating system (OS) or other
application.
Q93. What are the Deadlock Characterization?
Ans:
Ans:
Symmetric Multiprocessing
Asymmetric Multiprocessing
Ans: In certain situations the page tables can become large enough that, by
paging the page tables themselves, one can simplify the memory-allocation
problem by ensuring that everything is allocated as fixed-size pages rather
than variable-sized chunks, and also enable swapping out portions of the page
table that are not currently in use.
Ans: If there is a cycle in the graph and each resource has only one instance,
then there is deadlock. In this case, a cycle is a necessary and sufficient
condition for deadlock. If there is a cycle in the graph, and each resource has
more than one instance, there may or may not be deadlock.
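In the single-instance case, deadlock detection therefore reduces to cycle detection on the graph. A depth-first-search sketch (the process names in the demo are hypothetical):

```python
def has_cycle(graph):
    """Detect a cycle in a directed wait-for graph (process -> processes it waits on).
    With single-instance resources, a cycle implies deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2     # unvisited / on current DFS path / finished
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:                # back edge: cycle found
                return True
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in list(graph))

# P1 waits on P2, P2 waits on P3, P3 waits on P1: a circular wait, i.e. deadlock.
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False
```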
Ans: Processor affinity, also called CPU pinning or "cache affinity", enables the
binding and unbinding of a process or a thread to a central processing unit
(CPU) or a range of CPUs, so that the process or thread will execute only on
the designated CPU or CPUs rather than on any CPU.
Ans:
Ans: Banker’s algorithm is used to avoid deadlock and to allocate resources
safely to each process in the computer system. It is named after the banking
system, where a bank never allocates available cash in such a way that it can
no longer satisfy the requirements of all of its customers.
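The safety check at the heart of the algorithm can be sketched as follows; the matrices in the demo are the widely used textbook example (assumed here, not taken from this document):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: can every process finish in some order?"""
    work = available[:]              # resources currently free
    finish = [False] * len(allocation)
    while True:
        progressed = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and release its allocation.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progressed = True
        if not progressed:
            return all(finish)       # safe iff everyone could finish

# Assumed example: 5 processes, 3 resource types.
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # True: a safe sequence exists
```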
Ans: Thread Control Block (TCB) is a data structure in the operating system
kernel which contains thread-specific information needed to manage it. The
TCB is the manifestation of a thread in an operating system. Examples of
information contained within a TCB are the thread identifier, the program
counter, the register set, the stack pointer, the thread state, and a pointer to
the process control block (PCB) of the process to which the thread belongs.
Written by
M.Uzair