CT2 QB

Q1. Explain the pre-emptive and non-preemptive type of scheduling.

Pre-emptive Scheduling: Even if the CPU is allocated to one process, it can be preempted and given to another process if that process has a higher priority or satisfies some other scheduling criterion.
• Throughput is lower.
• Processes with higher priority are scheduled first.
• It does not treat all processes as equal.
• Algorithm design is complex.
Circumstances for pre-emption:
• A process switches from the running to the ready state.
• A process switches from the waiting to the ready state.
For e.g.: Round Robin, Priority scheduling algorithms. Pre-emptive scheduling is suitable for real-time systems (RTS).
Non-Preemptive Scheduling: Once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state.
• Throughput is high.
• It is not suitable for real-time systems (RTS).
• Processes of any priority can get scheduled.
• It treats all processes as equal.
• Algorithm design is simple.
Circumstances for non-pre-emptive scheduling:
• A process switches from the running to the waiting state.
• A process terminates.
For e.g.: the FCFS algorithm.
Q2. Explain first come first served (FCFS) algorithm. Give one example.
First-Come-First-Served (FCFS) scheduling is a non-preemptive algorithm.
• Once the CPU is allocated to a process, it keeps the CPU until it releases it, either by terminating or by requesting I/O.
• In this algorithm, the process that requests the CPU first is allocated the CPU first. FCFS scheduling is implemented with a FIFO queue.
• When a process enters the ready queue, its PCB is linked to the tail of the queue.
• When the CPU is available, it is allocated to the process at the head of the queue. Once the CPU is allocated to a process, that process is removed from the queue.
• The process releases the CPU on its own.
Example:

Suppose the processes arrive at time 0 in the order P1, P2, P3, with burst times of 24, 3 and 3 respectively (the figures implied by the averages below).

Gantt chart: | P1 (0–24) | P2 (24–27) | P3 (27–30) |

Average waiting time: (0 + 24 + 27)/3 = 17

Average turnaround time: (24 + 27 + 30)/3 = 27
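
A quick way to verify these figures is a short Python sketch (the burst times 24, 3 and 3 are the ones implied by the averages above):

    # FCFS: processes run in arrival order; all arrive at time 0.
    bursts = {"P1": 24, "P2": 3, "P3": 3}   # insertion order = arrival order

    clock = 0
    waiting, turnaround = {}, {}
    for name, burst in bursts.items():
        waiting[name] = clock        # time spent waiting before getting the CPU
        clock += burst
        turnaround[name] = clock     # completion time, since arrival is t = 0

    print(sum(waiting.values()) / 3)     # 17.0
    print(sum(turnaround.values()) / 3)  # 27.0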

Q3. Explain Shortest Job First (SJF) scheduling.

Shortest Job First scheduling runs the process with the shortest burst time (duration) first.

• This is the best approach to minimize waiting time.


• This is used in Batch Systems.
• It is of two types:
1. Non Pre-emptive
2. Pre-emptive
• To implement it successfully, the burst times of the processes must be known to the scheduler in advance, which is not always practically feasible.
• This scheduling algorithm is optimal if all the jobs/processes are available at the same time (i.e., arrival time is 0, or the same, for all).
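
A minimal non-preemptive SJF sketch in Python (the process names and burst times below are illustrative, and all processes are assumed to arrive at time 0):

    # Non-preemptive SJF: always pick the ready process with the shortest burst.
    processes = [("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)]   # (name, burst time)

    clock = 0
    for name, burst in sorted(processes, key=lambda p: p[1]):  # shortest first
        print(f"{name}: waits {clock}, finishes at {clock + burst}")
        clock += burst
    # Runs P4, P1, P3, P2; average waiting time = (0 + 3 + 9 + 16) / 4 = 7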

Q4. Explain SRTN scheduling.

SRTN (Shortest Remaining Time Next) is the pre-emptive counterpart of SJF.
• It is very useful in a time-sharing environment.
• In this scheduling algorithm, the process with the smallest estimated run-time to completion is run next, including new arrivals.
• SRTN has higher overhead than SJF.
• SRTN needs to track the elapsed time of the currently running process, and it must handle the occasional pre-emption properly.
• The major point is that under SRTN, newly arrived short processes run almost immediately, but longer jobs may have a longer mean waiting time.
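
The behaviour can be sketched by simulating one time unit at a time and always running the arrived process with the least remaining time (Python; the arrival and burst times below are illustrative):

    # SRTN: at every tick, pre-empt in favour of the smallest remaining time.
    procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}  # name: (arrival, burst)
    remaining = {n: b for n, (a, b) in procs.items()}

    clock, finished = 0, {}
    while remaining:
        ready = [n for n in remaining if procs[n][0] <= clock]
        if not ready:                                 # nothing has arrived yet
            clock += 1
            continue
        cur = min(ready, key=lambda n: remaining[n])  # shortest remaining time
        remaining[cur] -= 1                           # run it for one time unit
        clock += 1
        if remaining[cur] == 0:
            finished[cur] = clock
            del remaining[cur]

    print(finished)  # {'P2': 5, 'P4': 10, 'P1': 17, 'P3': 26}

Note how the short P2 and P4 finish early while the long P1 and P3 wait, exactly as described above.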

Q5. Define deadlock.


A deadlock is a situation where each of a set of processes waits for a resource that is assigned to another process. In this situation, none of the processes gets executed, since the resource each one needs is held by some other process that is itself waiting for a resource to be released.

Let us assume that there are three processes P1, P2 and P3, and three different resources R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.

After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it cannot complete without R2. P2 in turn demands R3, which is being used by P3, so P2 also stops because it cannot continue without R3. P3 then demands R1, which is being used by P1, so P3 also stops its execution.

In this scenario, a cycle is formed among the three processes. None of the processes can progress; they are all waiting. The computer becomes unresponsive since all the processes are blocked.
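
The circular wait in this example can be detected by building a wait-for graph and checking it for a cycle. A Python sketch using the processes above:

    # Wait-for graph: an edge P -> Q means P is waiting for a resource Q holds.
    wait_for = {"P1": "P2",   # P1 wants R2, which P2 holds
                "P2": "P3",   # P2 wants R3, which P3 holds
                "P3": "P1"}   # P3 wants R1, which P1 holds

    def has_cycle(graph, start):
        seen, node = set(), start
        while node in graph:
            if node in seen:
                return True       # we came back to a process already visited
            seen.add(node)
            node = graph[node]
        return False

    print(has_cycle(wait_for, "P1"))  # True: the three processes are deadlocked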


Q6. Describe any four conditions for deadlock.
1. Mutual exclusion: Only one process at a time can use a non-sharable resource.
2. Hold and wait: A process is holding at least one resource and is waiting to acquire additional resources held by other processes.
3. No pre-emption: A resource can be released only voluntarily by the process holding it, after that process completes its task.
4. Circular wait: There exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource held by P1, P1 is waiting for a resource held by P2, …, Pn−1 is waiting for a resource held by Pn, and Pn is waiting for a resource held by P0.

Deadlock prevention:

1. Preventing Mutual exclusion condition

2. Preventing Hold and wait condition

3. Preventing No preemption condition

4. Preventing Circular wait condition


Deadlocks can be prevented by preventing at least one of the four required
conditions:

Mutual Exclusion

Shared resources such as read-only files do not lead to deadlocks. Unfortunately, some resources, such as printers and tape drives, require exclusive access by a single process, so this condition usually cannot be prevented.

Hold and Wait

• To prevent this condition, processes must be prevented from holding one or more resources while simultaneously waiting for one or more others. There are several possibilities for this:
• Require that all processes request all their resources at one time. This can be wasteful of system resources if a process needs one resource early in its execution and does not need some other resource until much later.

No Preemption

• Preemption of process resource allocations can prevent this condition of deadlocks, when it is possible.
• One approach is that if a process is forced to wait when requesting a new resource, then all other resources previously held by this process are implicitly released (preempted), forcing the process to re-acquire the old resources along with the new resources in a single request, similar to the previous approach.

Circular Wait

One way to avoid circular wait is to number all resources and to require that processes request resources only in strictly increasing (or decreasing) order. In other words, in order to request resource Rj, a process must first release all Ri such that i >= j. One big challenge in this scheme is determining the relative ordering of the different resources.
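
In code, this rule amounts to always acquiring locks in one agreed global order, whatever order a caller asks for them in. A minimal Python sketch (the lock names and the use_resources function are illustrative):

    import threading

    # Number the resources once; every thread must acquire them in increasing order.
    R1, R2 = threading.Lock(), threading.Lock()
    ORDER = [R1, R2]                     # the agreed global ordering

    def use_resources(needed):
        ordered = sorted(needed, key=ORDER.index)   # request in increasing order only
        for lock in ordered:
            lock.acquire()
        try:
            pass                         # ... work with the resources ...
        finally:
            for lock in reversed(ordered):
                lock.release()

    use_resources([R2, R1])              # still acquired as R1 then R2: no circular wait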

Explain Deadlock Avoidance with an example.

Most prevention algorithms have poor resource utilization and hence result in reduced throughput. Instead, we can try to avoid deadlocks by making use of prior knowledge about the usage of resources by processes, including the resources available, the resources allocated, and future requests and releases by processes. Most deadlock-avoidance algorithms need every process to tell in advance the maximum number of resources of each type that it may need. Based on all this information, we may decide whether a process should wait for a resource or not, and thus avoid any chance of circular wait.

• Deadlock is avoided by the following approaches:
✓ Safe state
✓ Resource-allocation graph
✓ Banker's algorithm
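
A minimal sketch of the Banker's safety test in Python (the allocation and maximum-need figures below are illustrative, in the style of the standard textbook example):

    # Banker's algorithm safety test: can every process finish in some order?
    available = [3, 3, 2]                 # free instances of each resource type
    alloc = {"P0": [0, 1, 0], "P1": [2, 0, 0], "P2": [3, 0, 2],
             "P3": [2, 1, 1], "P4": [0, 0, 2]}
    maxneed = {"P0": [7, 5, 3], "P1": [3, 2, 2], "P2": [9, 0, 2],
               "P3": [2, 2, 2], "P4": [4, 3, 3]}

    def is_safe(available, alloc, maxneed):
        work = available[:]
        unfinished = set(alloc)
        while unfinished:
            # find a process whose remaining need fits in what is free now
            p = next((p for p in unfinished
                      if all(maxneed[p][i] - alloc[p][i] <= work[i]
                             for i in range(len(work)))), None)
            if p is None:
                return False              # nobody can finish: the state is unsafe
            for i in range(len(work)):    # p finishes and returns its allocation
                work[i] += alloc[p][i]
            unfinished.remove(p)
        return True

    print(is_safe(available, alloc, maxneed))  # True: a safe sequence exists

A request is granted only if the state that would result still passes this safety test.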

Explain the concept of Virtual Memory with a diagram.


1. Virtual memory is the separation of user logical memory from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available.
2. Virtual memory makes the task of programming much easier, because the programmer no longer needs to worry about the amount of physical memory available for the execution of a program.
3. It is the process of increasing the apparent size of a computer's RAM by using a section of the hard disk storage as an extension of RAM.
4. Computers have a limited amount of RAM available to the CPU (historically as little as 64 or 128 MB), which is not sufficient to run all the applications most users need, in the expected way, all at once.
What is a File?

A file can be defined as a data structure that stores a sequence of records. Files are stored in a file system, which may exist on a disk or in main memory. Files can be simple (plain text) or complex (specially formatted).

A collection of files is known as a directory, and the collection of directories at the different levels is known as the file system.

With a neat diagram, explain file access methods.

There are two methods to access a file:
1. Sequential access
2. Direct access
1. Sequential Access Method: Information in the file is processed in order, one record after the other. This mode of access is by far the most common; for example, editors and compilers usually access files in this fashion.

Reads and writes make up the bulk of the operations on a file.
• A read operation (read next) reads the next portion of the file and automatically advances the file pointer, which tracks the I/O location.
• Similarly, the write operation (write next) appends to the end of the file and advances the pointer to the end of the newly written material (the new end of file).

2. Direct Access Method: A file is made up of fixed-length logical records that allow programs to read and write records rapidly in no particular order. Thus, we may read block 14, then read block 53, and then write block 7. There are no restrictions on the order of reading or writing for a direct-access file. The direct-access method is based on a disk model of a file, since disks allow random access to any file block. Direct-access files are of great use for immediate access to large amounts of information. A relative block number is an index relative to the beginning of the file.
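
Both methods map directly onto ordinary file operations, as in this Python sketch (the file name and the fixed record length are illustrative):

    RECORD = 16                                # assumed fixed record length in bytes

    with open("data.bin", "wb") as f:          # build a small ten-record demo file
        for i in range(10):
            f.write(f"record {i:02}".ljust(RECORD).encode())

    with open("data.bin", "rb") as f:
        first = f.read(RECORD)                 # sequential: read next, pointer advances
        second = f.read(RECORD)                # ...and again
        f.seek(7 * RECORD)                     # direct: jump straight to relative block 7
        seventh = f.read(RECORD)

    print(first, second, seventh)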
Attributes of the File

1. Name

Every file carries a name by which it is recognized in the file system. One directory cannot have two files with the same name.

2. Identifier

Along with the name, each file has a unique identifier, usually a number, by which the file system recognizes it internally. (The extension that follows the name, such as .txt for a text file or .mp4 for a video, indicates the file's type rather than its identifier.)

3. Type

In a file system, files are classified into different types such as video files, audio files, text files, executable files, etc.

4. Location

There are several locations in the file system at which files can be stored. Each file carries its location as an attribute.

5. Size

The size of the file is one of its most important attributes. By size we mean the number of bytes the file occupies in storage.

6. Protection

The administrator of the computer may want different protections for different files. Therefore each file carries its own set of permissions for different groups of users.

7. Time and Date

Every file carries a timestamp recording the time and date at which it was last modified.

Linked Allocation:
In this method, each file occupies disk blocks that may be scattered anywhere on the disk; the file is a linked list of allocated blocks.
When space has to be allocated to the file, any free block can be used from the disk, and the system makes an entry in the directory.
The directory entry for an allocated file contains the file name and pointers to the first and last allocated blocks of the file.
The file pointer is initialized to a nil value to indicate an empty file.
A write to a file causes a search for a free block; once found, the data is written to it and the block is linked to the end of the file.
To read the file, blocks are read by following the pointers from block to block, starting with the block address specified in the directory entry.
For example, a file of five blocks might start with block 9 and continue with block 16, then block 1, then block 10, and finally block 25. Each allocated block contains a pointer to the next block.
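
The five-block example can be modelled with a table of next-block pointers, as in this Python sketch:

    # Linked allocation: each allocated block holds a pointer to the next block.
    # next_block[b] is the block that follows b; None marks the end of the file.
    next_block = {9: 16, 16: 1, 1: 10, 10: 25, 25: None}

    def read_file(first_block):
        block = first_block            # the start address comes from the directory entry
        while block is not None:
            print("reading block", block)
            block = next_block[block]  # follow the pointer stored in the block

    read_file(9)   # reads blocks 9, 16, 1, 10, 25 in order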

Fixed Partitioning

Fixed (or static) partitioning is one of the earliest and simplest memory
management techniques used in operating systems. It involves dividing the main
memory into a fixed number of partitions at system startup, with each partition
being assigned to a process. These partitions remain unchanged throughout the
system’s operation, providing each process with a designated memory space. This
method was widely used in early operating systems and remains relevant in
specific contexts like embedded systems and real-time applications. However,
while fixed partitioning is simple to implement, it has significant limitations,
including inefficiencies caused by internal fragmentation.
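
The internal fragmentation mentioned above is easy to quantify, as in this Python sketch (the partition and process sizes are illustrative):

    # Fixed partitioning: each process occupies a whole partition; the unused
    # tail of every partition is internal fragmentation no one else can use.
    partitions = [100, 200, 300, 400]    # fixed partition sizes in KB, set at startup
    processes = [90, 150, 220, 310]      # sizes in KB of the processes loaded into them

    for part, proc in zip(partitions, processes):
        print(f"{part} KB partition holds {proc} KB -> {part - proc} KB wasted")
    # Total waste: 10 + 50 + 80 + 90 = 230 KB of internal fragmentation.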

Define Page Fault

A page fault is an error that occurs when a program attempts to access data that
is not currently in the main memory or random access memory (RAM).

Page faults affect all modern operating systems, including Linux. A page fault occurs when a process accesses a virtual address whose page is not currently mapped into physical memory; if the address is not part of the process's address space at all, the fault is invalid and the OS terminates the process.

Explain segmentation with example.

In an operating system, segmentation is a memory management technique that divides a program into smaller parts, called segments. Each segment represents a logical unit of the program, such as code, data, or stack. Segmentation helps in organizing and protecting the program's memory.

Example:
Imagine you are writing an application with different sections: one for
instructions, one for user data, and one for temporary data (stack). In
segmentation, these sections are separated into different segments, each with its
own memory space.

So, if your program has:

1. Code segment - for program instructions.

2. Data segment - for variables and constants.

3. Stack segment - for temporary data storage during execution.

Each of these segments can grow or shrink independently, and the operating
system manages them separately. This helps in protecting the program’s data
from being overwritten, as each segment has specific permissions and memory
allocation.
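
Address translation under segmentation checks each offset against the segment's limit before adding the base, as in this Python sketch (the base and limit values are illustrative):

    # Segment table: segment number -> (base address, limit).
    segment_table = {0: (1400, 1000),   # code segment
                     1: (6300, 400),    # data segment
                     2: (4300, 1100)}   # stack segment

    def translate(segment, offset):
        base, limit = segment_table[segment]
        if offset >= limit:
            raise MemoryError("segmentation fault: offset outside segment")
        return base + offset            # the physical address

    print(translate(2, 53))   # 4353: offset 53 lies inside the stack segment
    # translate(1, 500)       # would raise: 500 exceeds the data segment's limit of 400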

How demand paging is performed?

Demand paging is a memory management technique used in operating systems to load pages of data into memory only when they are needed. Instead of loading an entire program into memory at once, only the required parts (pages) are loaded on demand, which helps save memory space and improves efficiency.

How Demand Paging Works:

1. Page Request: When a program tries to access a specific part of data or code that is not currently in memory, the operating system checks whether that page (the specific part) is already loaded.
2. Page Fault: If the requested page is not in memory, a page fault occurs,
which is an alert to the operating system that the requested data is missing.

3. Loading the Page: The OS locates the page in the storage (e.g., hard disk or
SSD) and loads it into memory.

4. Resume Execution: Once the page is loaded, the program resumes from
where it left off, now able to access the requested page.
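
The four steps above condense into a few lines of Python (the page numbers and the free-frame counter are illustrative; real systems also handle frame replacement when memory is full):

    # Demand paging: a page is brought into memory only when first referenced.
    page_table = {}          # page number -> frame, for resident pages only
    next_free_frame = 0

    def access(page):
        global next_free_frame
        if page not in page_table:              # step 2: page fault
            print(f"page fault on page {page}: loading it from disk")
            page_table[page] = next_free_frame  # step 3: load into a free frame
            next_free_frame += 1
        return page_table[page]                 # step 4: execution resumes

    for p in [0, 2, 0, 3, 2]:
        print("page", p, "-> frame", access(p))
    # Only the first touch of each page faults; later accesses hit the page table.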
