Final Exam OS
CSEC-227
Chowdhury Sajadul Islam
4. The final condition for deadlock is circular wait. Assume process P1 is
waiting for resource R2, which is already held by process P2. P2, in turn, is
waiting for a resource held by the next process, and so on, until the last
process is waiting for a resource held by the first process.
The circular wait condition thus creates a circular chain that puts all of the
processes in a waiting state.
Let’s take a practical example to understand this issue. Jack and Jones
share a bowl of soup. Both of them want to drink from the same bowl
using a single spoon at the same time, which is not feasible.
Before starting execution, a process does not know how many resources it
will need to complete. In addition, the time at which a process will
complete and free its resources is also unknown.
Basically, in deadlock avoidance, the OS tries not to enter a circular wait
condition. If the OS can allocate all the requested resources to a process
without causing a deadlock in the future, the system is in a safe state. If the
OS cannot allocate all the requested resources without causing a deadlock in
the future, the system is in an unsafe state.
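The safe/unsafe distinction can be sketched with a Banker's-style safety check: the system is safe if some ordering lets every process finish with the resources currently available. This is a minimal illustration, not the exam's required method; the resource matrices below are the classic textbook example, used here as assumed data.

```python
# Sketch of a Banker's-style safety check. A state is safe if every
# process can eventually acquire its remaining need and finish.

def is_safe(available, allocation, need):
    """Return True if some safe sequence exists for the given state."""
    work = list(available)
    finish = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                # Process i can run to completion and release what it holds.
                work = [w + a for w, a in zip(work, alloc)]
                finish[i] = True
                progress = True
    return all(finish)

# Hypothetical example: 5 processes, 3 resource types.
available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))  # True: e.g. P1,P3,P4,P0,P2
```

If `is_safe` returns False, granting the pending request would move the system into an unsafe state, so an avoidance-based OS would make the requesting process wait.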
5.1. Resource Allocation Graph (RAG) Algorithm
Using a RAG, it’s possible to predict the occurrence of deadlock in an OS.
The resource allocation graph is a pictorial view of all allocated resources,
available resources, and the OS’s current state. From it we can determine how
many resources are allocated to each process and how many will be needed in
the future, and thus avoid the deadlock.
In deadlock detection, by contrast, the OS assumes that a deadlock may occur
in the future. It runs a deadlock detection mechanism at certain intervals of
time, and when it detects a deadlock, it starts a recovery approach.
The main difference between a RAG and a wait-for graph is the kind of
vertices each graph contains. A RAG has two types of vertex: resources and
processes. A wait-for graph has only one type: processes. We can also derive
a wait-for graph from a RAG:
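The derivation above can be sketched in a few lines: collapse each "process waits for a resource held by another process" pair into a direct process-to-process edge, then look for a cycle with depth-first search. The edges below are hypothetical illustration data mirroring the P1/P2 circular wait described earlier.

```python
# Sketch: collapse a RAG into a wait-for graph, then detect deadlock
# by finding a cycle with DFS.

def wait_for_graph(requests, holders):
    """requests: process -> resource it waits for.
    holders: resource -> process currently holding it."""
    wfg = {}
    for proc, res in requests.items():
        if res in holders:
            wfg.setdefault(proc, set()).add(holders[res])
    return wfg

def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on stack / done
    color = {}
    def dfs(u):
        color[u] = GRAY
        for v in graph.get(u, ()):
            if color.get(v, WHITE) == GRAY:
                return True        # back edge: circular wait found
            if color.get(v, WHITE) == WHITE and dfs(v):
                return True
        color[u] = BLACK
        return False
    return any(color.get(u, WHITE) == WHITE and dfs(u) for u in graph)

# P1 waits for R2 held by P2; P2 waits for R1 held by P1.
requests = {"P1": "R2", "P2": "R1"}
holders  = {"R1": "P1", "R2": "P2"}
print(has_cycle(wait_for_graph(requests, holders)))  # True: deadlock
```

A cycle in the wait-for graph is exactly the circular wait condition, so this check is what a periodic detection mechanism would run.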
Now, as soon as the OS detects a deadlock, it starts the recovery method. There
are two main approaches to recovering from a deadlock.
The first approach is resource preemption: the OS takes resources away from
some processes and gives them to others until the cycle is broken. One
disadvantage of this approach is that the same process may repeatedly become
the victim of preemption; in that case, the process is stuck in starvation.
The other approach is rollback: the OS rolls processes back to a safe state
where the deadlock had not yet occurred. For this, the OS has to maintain
logs recording the point up to which it was in a safe state.
One disadvantage of this method is that there is no clear decision parameter
for selecting the order in which the processes are rolled back.
One pessimistic approach is to abort all the deadlocked processes. This is the
simplest way of breaking the cycle to recover from the deadlock, but it’s also
the most costly way of handling it. In this method, we kill all the processes,
and the OS will either discard them or restart a portion of the processes later
as required.
Alternatively, we can abort one process at a time until the deadlock is
eliminated from the system.
In this method, the OS kills one process at a time, selecting the process
that has done the least work. It then runs the deadlock detection algorithm
to verify whether the deadlock has been resolved. If it has not, the OS keeps
killing processes until the deadlock is eliminated.
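The kill-one-at-a-time loop can be sketched as below. The detection step is passed in as a function so the sketch stays self-contained; the `detect` stand-in and the per-process work figures are hypothetical, not part of the exam material.

```python
# Sketch of "abort one process at a time": repeatedly kill the
# deadlocked process with the least work done, then re-run detection.

def recover(deadlocked, work, detect):
    """deadlocked: set of process names; work: process -> work done;
    detect: function reporting whether a deadlock still exists."""
    killed = []
    while detect(deadlocked):
        victim = min(deadlocked, key=lambda p: work[p])  # least work done
        deadlocked.remove(victim)                        # abort the victim
        killed.append(victim)
    return killed

work = {"P1": 120, "P2": 30, "P3": 75}   # hypothetical work units
deadlocked = {"P1", "P2", "P3"}
# Stand-in detector: the cycle is broken once any one process is gone.
detect = lambda procs: len(procs) == 3
print(recover(deadlocked, work, detect))  # ['P2'] (least work: 30 units)
```

In a real OS the detector would be the wait-for-graph cycle check, and "work done" might be measured in CPU time consumed or resources held.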
The OS has to map the logical address space to the physical address
space and manage memory usage between the processes as appropriate,
for example, via paging, segmentation, and the use of virtual memory.
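The logical-to-physical mapping can be illustrated with a minimal paging sketch. The 4 KiB page size and the page table contents below are assumptions chosen for the example, not values from the text.

```python
# Minimal sketch of paging address translation: split a logical address
# into (page number, offset) and look the page up in a page table.

PAGE_SIZE = 4096  # assumed 4 KiB pages

def translate(logical_addr, page_table):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]            # KeyError would model a page fault
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 9, 2: 1}         # hypothetical page -> frame mapping
print(translate(8300, page_table))      # page 2, offset 108 -> 4204
```

Segmentation and virtual memory refine this same idea: segmentation uses variable-sized regions with a base and limit, and virtual memory lets some pages live on disk rather than in physical frames.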
In computer systems design, the memory hierarchy is an enhancement that
organizes the computer's memory so that access time is minimized. The memory
hierarchy was developed based on a property of program behavior known as
locality of reference. The figure below depicts the different levels of the
memory hierarchy:
- Secondary Memory: this level comprises peripheral storage devices that
are accessible by the processor via an I/O module.
- Primary Memory: this level comprises memory that is directly accessible
by the processor.
We can infer the following characteristics of memory hierarchy design
from the figure above:
Capacity:
As we move from top to bottom in the hierarchy, the capacity increases.
Access Time:
This represents the time interval between the read/write request and
the availability of the data. As we move from top to bottom in the
hierarchy, the access time increases.
Memory Hierarchy Design and its Characteristics
As we move up the hierarchy, from bottom to top, the cost per bit
increases; i.e., internal memory is costlier than external memory.
Single User Contiguous allocation is the first and simplest memory
management scheme. In this case, the memory manager allocates all the
available memory to a single program, so it can only load a new program
after it has deallocated the previous one.
The Fixed Partitions scheme divides the available memory into fixed-
size partitions, thus allowing multiple programs to be allocated
simultaneously. These memory partitions are defined at system startup
and aren’t modified until the system reboots. The memory manager, in
this case, keeps a partition memory table to track the memory and
determine in which partition a specific program should be placed.
Memory management
The Dynamic Partitions scheme divides the computer memory into
dynamically specified partitions, avoiding memory-wasting scenarios. A
partition memory table is also required here. However, this scheme enables the
memory manager to modify the memory table lines (partitions) during the
execution of a computer system. Memory management schemes with partitions
demand a strategy to allocate programs. Let’s see two of them:
First Fit Allocation: The memory manager allocates a program to the first
partition found with sufficient memory.
Best Fit Allocation: The memory manager finds the best partition in which to
allocate a program. The best fit means allocating a program to the smallest
free partition possible (fixed partitions) or maintaining the maximum free
memory in a contiguous partition (dynamic partitions).
All the schemes used in early systems, however, share a common problem: they
store the entire program in main memory and cannot execute a program bigger
than the available physical memory. This problem spurred the development of
new memory management schemes based on virtual memory.
Segmentation
The memory blocks that are allocated to processes are divided into
segments of different sizes to fit the varying memory requirements of
each process. The segments do not need to be stored continuously
across a fixed address space, and they can be moved in and out of
memory as required.
Memory Allocation
The OS tracks the allocation of memory for each process using a
segment table, which records where each segment required for a process
is physically located.
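A segment-table lookup can be sketched as below: each entry records a base physical address and a limit, and an offset past the limit is an addressing error. The table contents are hypothetical example values.

```python
# Sketch of segment-table address translation. A logical address is a
# (segment, offset) pair; the table maps each segment to (base, limit).

def translate(segment, offset, segment_table):
    base, limit = segment_table[segment]
    if offset >= limit:
        # Offset falls outside the segment: the hardware would trap.
        raise MemoryError("segmentation fault: offset beyond limit")
    return base + offset

# Hypothetical table: segment -> (base address, limit in bytes).
segment_table = {0: (1400, 1000), 1: (6300, 400), 2: (4300, 400)}
print(translate(2, 53, segment_table))  # 4300 + 53 = 4353
```

Because each segment carries its own base, segments can be placed anywhere in physical memory and moved by simply updating the table entry.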
Virtual memory
The operating system can extend the limited physical space of memory
by using other storage in the computer. For example, a hard disk can act
as virtual memory: the operating system can swap parts of a process
that are not currently in use from the main memory into an allocated
space on the hard disk, then swap it back into main memory when it is
needed.
This has the benefit of extending the memory available, and it might appear that
swapping instructions and data out of main memory into secondary storage
means that a vast quantity of memory is potentially available. However,
accessing a secondary storage medium, such as the hard drive, is considerably
slower than accessing main memory, therefore it may slow down the computer
system’s performance.
Requirements of Memory Management System
Memory management keeps track of the status of each memory location,
whether it is allocated or free. It allocates the memory dynamically to the
programs at their request and frees it for reuse when it is no longer needed.
Memory management is meant to satisfy the following requirements:
When a program is swapped out to disk, it is not guaranteed to occupy its
previous memory location when it is swapped back into main memory, since that
location may now be occupied by another process. We may therefore need to
relocate the process to a different area of memory. Thus a program may be
moved within main memory as a result of swapping.
After the program is loaded into main memory, the processor and the operating
system must be able to translate logical addresses into physical addresses.
Branch instructions contain the address of the next instruction to be executed,
and data reference instructions contain the address of the byte or word of
data referenced.
This concept has an advantage. For example, multiple processes may use the
same system file, and it is natural to load one copy of the file into main
memory and let those processes share it. It is the task of memory management
to allow controlled access to shared areas of memory without compromising
protection. The mechanisms used to support relocation also support sharing
capabilities.
1. All extents are of the same size, and the size is predetermined.
2. Extents can be of any size and are allocated dynamically.
3. Extents can be of a few fixed sizes, and these sizes are
predetermined.
The free-space list pointer could be stored on the disk, perhaps in several
places.
Request   Deadline (ms)   Cylinder
R1         57              77
R2        300              95
R3        250              25
R4         88              28
R5         85             100
R6        110              90
R7        299              50
R8        300              77
R9        120              12
R10       212               2
Repeat the preceding question, but this time batch requests that have
deadlines occurring within 75 milliseconds of each other.
Answer: The batches are as follows: Batch 1: (R1), Batch 2: (R4, R5,
R6, R9), Batch 3: (R10), Batch 4: (R2, R3, R7, R8)
For each of these I/O scenarios, would you design the operating system
to use buffering, spooling, caching, or a combination? Would you use
polled I/O, or interrupt-driven I/O? Give reasons for your choices.