CT2 QB
[Scheduling example: suppose the processes arrive in the order P1, P2, P3; the Gantt chart is omitted here.]
Let us assume that there are three processes P1, P2 and P3. There are three
different resources R1, R2 and R3. R1 is assigned to P1, R2 is assigned to
P2 and R3 is assigned to P3.
After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it cannot complete without R2. P2 in turn demands R3, which is being used by P3, so P2 also stops its execution because it cannot continue without R3. P3 then demands R1, which is being used by P1, therefore P3 also stops its execution.
In this scenario, a cycle is formed among the three processes. None of the processes is making progress; they are all waiting for one another. The computer becomes unresponsive since all the processes are blocked.
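The cycle described above can be checked mechanically with a tiny wait-for graph. The sketch below (in Python, purely illustrative; the dictionary simply restates who waits for whom in this scenario) reports a deadlock when following the wait-for edges leads back to the starting process.

# Wait-for graph for the scenario above:
# P1 waits for P2 (which holds R2), P2 waits for P3 (R3), P3 waits for P1 (R1).
waits_for = {"P1": "P2", "P2": "P3", "P3": "P1"}

def has_cycle(graph, start):
    # Walk the wait-for chain from 'start'; revisiting a process means a cycle, i.e. deadlock.
    seen = set()
    node = start
    while node in graph:
        if node in seen:
            return True
        seen.add(node)
        node = graph[node]
    return False

print("deadlock detected" if has_cycle(waits_for, "P1") else "no deadlock")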
Q6. Describe any four conditions for deadlock.
1. Mutual exclusion: Only one process at a time can use a non-sharable resource.
2. Hold and wait: A process is holding at least one resource and is waiting to acquire additional resources held by other processes.
3. No pre-emption: A resource can be released only voluntarily by the process holding it, after that process completes its task.
4. Circular wait: There exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.
Deadlock prevention works by ensuring that at least one of these conditions (for example mutual exclusion, no preemption or circular wait) can never hold.
One way to avoid circular wait is to number all resources, and to require that processes request resources only in strictly increasing (or decreasing) order. In other words, in order to request resource Rj, a process must first release all Ri such that i >= j. One big challenge in this scheme is determining the relative ordering of the different resources.
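As a small illustration of resource ordering (not part of the original answer; the lock names and worker functions below are assumptions), the same rule is used in multithreaded code as "lock ordering": every thread acquires locks only in increasing index order, so no thread ever holds a higher-numbered lock while waiting for a lower-numbered one, and a circular wait cannot form.

import threading

# Resources numbered in a fixed global order.
R1 = threading.Lock()   # index 1
R2 = threading.Lock()   # index 2

def worker_a():
    # Acquires R1 before R2, respecting the global order.
    with R1:
        with R2:
            pass  # critical section using both resources

def worker_b():
    # Must also acquire R1 first, even if it mainly needs R2.
    # Since no thread holds R2 while waiting for R1, a cycle is impossible.
    with R1:
        with R2:
            pass  # critical section

t1 = threading.Thread(target=worker_a)
t2 = threading.Thread(target=worker_b)
t1.start(); t2.start()
t1.join(); t2.join()
print("finished without deadlock")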
Most prevention algorithms have poor resource utilization, and hence result in reduced throughput. Instead, we can try to avoid deadlocks by making use of prior knowledge about the usage of resources by processes, including the resources available, the resources allocated, and the future requests and releases by processes. Most deadlock avoidance algorithms need every process to declare in advance the maximum number of resources of each type that it may need. Based on all this information, the system can decide whether a process should wait for a resource or not, and thus avoid any chance of a circular wait.
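A minimal sketch of the kind of safety check an avoidance algorithm (such as the Banker's algorithm) performs, assuming the usual Available, Allocation and Max data; the matrices below are made-up values used only for illustration.

def is_safe(available, allocation, need):
    # Banker's-style safety check: can every process finish in some order?
    work = available[:]                  # resources currently free
    finish = [False] * len(allocation)   # which processes could run to completion
    progressed = True
    while progressed:
        progressed = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            # Process i can finish if its remaining need fits in 'work'.
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                # Pretend it finishes and releases everything it holds.
                work = [w + a for w, a in zip(work, alloc)]
                finish[i] = True
                progressed = True
    return all(finish)

# Illustrative state: 3 processes, 2 resource types (made-up numbers).
available  = [3, 2]
allocation = [[0, 1], [2, 0], [2, 1]]
maximum    = [[4, 3], [3, 2], [3, 3]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]

print("safe state" if is_safe(available, allocation, need) else "unsafe state")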
A file can be defined as a data structure which stores a sequence of records. Files are stored in a file system, which may exist on a disk or in main memory. Files can be simple (plain text) or complex (specially formatted).
1.Name
Every file carries a name by which the file is recognized in the file system. One
directory cannot have two files with the same name.
2.Identifier
Along with the name, each file has a unique identifier, usually a number, by which the file system recognizes the file internally.
3.Type
In a file system, files are classified into different types such as video files, audio files, text files, executable files, etc. The file's extension usually indicates its type; for example, a text file has the extension .txt and a video file can have the extension .mp4.
4.Location
Files can be stored at several different locations in the file system. Each file carries its location as an attribute.
5.Size
The size of a file is one of its most important attributes. By the size of the file, we mean the number of bytes occupied by the file.
6.Protection
The administrator of the computer may want different protection for different files. Therefore each file carries its own set of permissions for different groups of users.
7.Time stamp
Every file carries a time stamp which records the time and date at which the file was last modified.
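Taken together, these attributes are what the file system records for each file (for example in its directory entry or file control block). A rough sketch, with field names that are assumptions rather than any real OS structure:

from dataclasses import dataclass

@dataclass
class FileAttributes:
    # Hypothetical grouping of the attributes described above.
    name: str          # human-readable name, unique within its directory
    identifier: int    # unique tag used internally by the file system
    ftype: str         # e.g. "text", "executable", "video"
    location: str      # where the file's blocks live on the device
    size: int          # size in bytes
    protection: str    # permissions, e.g. "rw-r--r--"
    modified: str      # time and date of last modification

f = FileAttributes("notes.txt", 1021, "text", "disk0: block 9", 4096,
                   "rw-r--r--", "2024-05-01 10:32")
print(f)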
Linked Allocation:
In this method, each file occupies disk blocks scattered anywhere on the disk.
The file is a linked list of allocated blocks.
When space has to be allocated to the file, any free block can be used from the disk, and the system makes an entry in the directory.
The directory entry for an allocated file contains the file name and pointers to the first and last allocated blocks of the file.
The file pointer is initialized to a nil value to indicate an empty file.
A write to a file causes a search for a free block.
After getting a free block, data is written to it and that block is linked to the end of the file.
To read the file, blocks are read by following the pointers from block to block, starting with the block address specified in the directory entry.
For example, a file of five blocks might start with block 9 and continue with block 16, then block 1, then block 10 and finally block 25. Each allocated block contains a pointer to the next block.
Fixed Partitioning
Fixed (or static) partitioning is one of the earliest and simplest memory
management techniques used in operating systems. It involves dividing the main
memory into a fixed number of partitions at system startup, with each partition
being assigned to a process. These partitions remain unchanged throughout the
system’s operation, providing each process with a designated memory space. This
method was widely used in early operating systems and remains relevant in
specific contexts like embedded systems and real-time applications. However,
while fixed partitioning is simple to implement, it has significant limitations,
including inefficiencies caused by internal fragmentation.
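A tiny worked illustration of the internal fragmentation mentioned above (the partition and process sizes are made-up values): every process is given a whole fixed partition, so whatever space it does not use inside that partition is wasted.

# Fixed partitions created at system startup (sizes in KB, assumed values).
partitions = [100, 200, 300, 400]

# Processes loaded one per partition, in order (sizes in KB, assumed values).
processes = [90, 150, 210, 320]

total_waste = 0
for part, proc in zip(partitions, processes):
    waste = part - proc          # internal fragmentation in this partition
    total_waste += waste
    print(f"partition {part} KB <- process {proc} KB, wasted {waste} KB")

print("total internal fragmentation:", total_waste, "KB")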
A page fault is an error that occurs when a program attempts to access data that
is not currently in the main memory or random access memory (RAM).
Page faults occur on all modern operating systems, including Linux. A page fault typically occurs when a process accesses a page of its virtual address space that is valid but not currently resident in physical memory; the operating system then brings the page in from secondary storage. (If the address is not valid at all, the access instead results in an error such as a segmentation fault.)
Example:
Imagine you are writing an application with different sections: one for
instructions, one for user data, and one for temporary data (stack). In
segmentation, these sections are separated into different segments, each with its
own memory space.
Each of these segments can grow or shrink independently, and the operating
system manages them separately. This helps in protecting the program’s data
from being overwritten, as each segment has specific permissions and memory
allocation.
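A minimal sketch of the segment-table lookup this implies (the segment names, base addresses and limits below are assumed values): a logical address is a (segment, offset) pair, the offset is checked against the segment's limit, and the physical address is base + offset.

# Hypothetical segment table: segment -> (base, limit); all values made up.
segment_table = {
    "code":  (1000, 400),   # instructions
    "data":  (2000, 300),   # user data
    "stack": (4000, 200),   # temporary data
}

def translate(segment, offset):
    # Translate a (segment, offset) logical address into a physical address.
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError(f"offset {offset} exceeds limit {limit} of segment '{segment}'")
    return base + offset

print(translate("data", 120))   # 2000 + 120 = 2120
print(translate("code", 10))    # 1000 + 10  = 1010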
1. Page Request: When a program tries to access a specific part of its data or code, the operating system checks whether the page containing it (the specific part) is already loaded in memory.
2. Page Fault: If the requested page is not in memory, a page fault occurs,
which is an alert to the operating system that the requested data is missing.
3. Loading the Page: The OS locates the page in the storage (e.g., hard disk or
SSD) and loads it into memory.
4. Resume Execution: Once the page is loaded, the program resumes from
where it left off, now able to access the requested page.
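These four steps can be mimicked with a toy demand-paging loop. In the sketch below the number of frames, the page reference string and the FIFO eviction policy are all assumptions chosen for illustration.

from collections import deque

frames = deque()                      # pages currently resident in RAM
capacity = 3                          # number of physical frames (assumed)
references = [1, 2, 3, 1, 4, 2, 5]    # made-up page reference string
faults = 0

for page in references:
    if page in frames:                                    # step 1: page already loaded
        print(f"access page {page}: already in memory")
        continue
    faults += 1                                           # step 2: page fault
    if len(frames) == capacity:
        victim = frames.popleft()                         # evict the oldest page (FIFO)
        print(f"access page {page}: fault, evict page {victim}")
    else:
        print(f"access page {page}: fault, free frame available")
    frames.append(page)                                   # step 3: load the page into memory
    # step 4: the program resumes with the page now available

print("total page faults:", faults)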