Demand paging reduces memory requirements and swap time by only swapping necessary pages, thus enhancing multiprogramming. However, it can lead to page faults when requested pages are not in RAM. Various page replacement algorithms, such as FIFO, LRU, and Optimal, manage which pages to swap out to optimize memory usage.


Advantages of demand paging:-

Its benefits are as follows:-

1. It reduces the memory requirement.

2. Memory is used more efficiently.

3. It reduces swap time, because we do not swap the entire process but only the required pages.

4. It increases the degree of multiprogramming.

Disadvantages of demand paging:-

There is a possibility of a page fault. A page fault is a situation in which a required page is not present in RAM.

3. Performance of demand paging:

Q.11 What is a page replacement algorithm?

Ans. Page:- A page is also called a virtual page or memory page.

Virtual memory is divided into blocks of equal size called pages. A page is the smallest unit of data in memory management.

Similarly, a frame is a block of the same size in physical memory.
Page replacement algorithms are the technique by which the operating system decides which page should be swapped out (written to disk) when memory needs room for a new page.

Page replacement is performed when the requested page is not found in main memory.

The operating system has the following types of page replacement algorithms:

FIFO page replacement algorithm
Optimal page replacement algorithm
LRU page replacement algorithm
LFU page replacement algorithm
MFU page replacement algorithm
Second chance page replacement algorithm
Page fault:- This is a type of interrupt. It occurs when a process requests a page that is not present in memory, so a frame must be allocated or an existing page must be replaced.

Page hit:- This occurs when a process requests a page that is already in memory.

FIFO page replacement algorithm:-

This is called the first in, first out page replacement algorithm. The operating system maintains a FIFO queue so that it can track all the pages in memory.

The page that arrives first (the oldest page) is placed at the front of the FIFO queue and the page that arrives last (the newest page) is placed at the back. The page at the front of the queue, the one that came in first, is replaced first.

One of its problems is Belady's anomaly: in some cases, increasing the number of frames actually increases the page fault rate.

Now let us understand this with an example.

Here (+) = page fault
(*) = page hit

We have taken three frames, and the reference string works out to 2, 3, 4, 1, 7, 4, 2, 5, 7, 1.

First, pages 2, 3, and 4 take the empty slots, so page faults occur three times.

Next, page 1 replaces 2, because 2 came in first; similarly, page 7 replaces 3.

There is no replacement for page 4 because it is already in memory, so it is a page hit. Then page 2 replaces 4, and page 5 replaces 1.

Page 7 is already in memory, so it is a hit and no page is replaced. Then page 1 replaces 7.

Altogether, this example gives 8 page faults and 2 page hits.
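
As a quick check of the walkthrough above, here is a minimal Python sketch of FIFO replacement (the function name and the use of the reference string from this example are only illustrative); it reports the same 8 faults and 2 hits for three frames.

```python
from collections import deque

def fifo_page_faults(reference_string, frame_count):
    """Count page faults and page hits under FIFO replacement."""
    frames = deque()                      # oldest page sits at the front
    faults = hits = 0
    for page in reference_string:
        if page in frames:                # page hit: already in memory
            hits += 1
        else:                             # page fault: page must be loaded
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()          # evict the page that came in first
            frames.append(page)           # newest page goes to the back
    return faults, hits

# Reference string from the example above, with three frames.
print(fifo_page_faults([2, 3, 4, 1, 7, 4, 2, 5, 7, 1], 3))   # (8, 2)
```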

Optimal page replacement algorithm:-

The page fault rate is very low with this algorithm. The page that will not be used for the longest time is replaced; the algorithm looks into the future. That is, the page whose next use lies farthest in the future is replaced first.
But we cannot implement it in practice, because in reality it is difficult to know which page will be used when in the future.

Let us see this through an example:

Here (+) = page fault
(*) = page hit

We have taken three frames.

First, pages 2, 3, and 1 take the empty slots, so page faults occur three times.

Now page 5 replaces 1, because 1 will be used latest in the future. Page 3 is already present, so it is a hit.

Page 4 replaces 3, since 3 is not needed again. Page 7 replaces 4, because the next use of 4 is farthest away.

Page 2 is already there, so it is a hit. Page 5 is already in memory, so again the page is hit.

Finally page 4 replaces 2, because 2 comes first among 2, 5, and 7; since no page is referenced again after this point, the FIFO rule is used to break the tie.
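
For illustration, here is a minimal Python sketch of the optimal policy (the function name is illustrative, and the reference string is a reconstruction of the one implied by the example above, so treat it as an assumption); it evicts the resident page whose next use lies farthest in the future.

```python
def optimal_page_faults(reference_string, frame_count):
    """Count page faults and hits under the optimal (farthest-future-use) policy."""
    frames = []
    faults = hits = 0
    for i, page in enumerate(reference_string):
        if page in frames:                       # page hit
            hits += 1
            continue
        faults += 1                              # page fault
        if len(frames) < frame_count:            # a free frame is still available
            frames.append(page)
            continue
        future = reference_string[i + 1:]
        # Evict the resident page whose next use is farthest away; pages never
        # used again count as infinitely far, and ties keep the page that
        # appears first in the frame list.
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else float("inf"))
        frames[frames.index(victim)] = page
    return faults, hits

# Reference string reconstructed from the example above, with three frames.
print(optimal_page_faults([2, 3, 1, 5, 3, 4, 7, 2, 5, 4], 3))   # (7, 3)
```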

LRU page replacement algorithm:-

This is called the least recently used page replacement algorithm. The page that has not been used in memory for the longest time is replaced. It looks at the past: it is similar to optimal page replacement, except that it looks backwards instead of into the future.
This algorithm can be implemented in practice. It assumes that a page which has not been used for a long time is unlikely to be needed again soon.

Let us see this through an example.

Here (+) = page fault
(*) = page hit

Page faults = 9, page hits = 1.

We have taken three frames.

First, pages 2 and 1 take the empty slots, so page faults occur twice.

Then page 1 is already present, so it is a hit.

Then page 3 takes an empty slot, so no page is replaced, but a page fault still occurs.
Page 5 is not in memory, so we replace the page that has not been used for the longest time; 2 has not been used, so 5 replaces it.

Similarly, page 2 replaces 1, page 0 replaces 3, and page 3 replaces 5.

Then page 1 replaces 3, and page 5 replaces 0.
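
A minimal Python sketch of LRU is shown below (the function name is illustrative, and the reference string is only one string consistent with the totals above, 9 faults and 1 hit, so it is an assumption); it keeps the resident pages in an ordered dictionary that acts as a recency stack.

```python
from collections import OrderedDict

def lru_page_faults(reference_string, frame_count):
    """Count page faults and hits under least-recently-used replacement."""
    frames = OrderedDict()                    # least recently used page at the front
    faults = hits = 0
    for page in reference_string:
        if page in frames:
            hits += 1
            frames.move_to_end(page)          # mark as most recently used
        else:
            faults += 1
            if len(frames) == frame_count:
                frames.popitem(last=False)    # evict the least recently used page
            frames[page] = True
    return faults, hits

# One reference string consistent with the totals above (9 faults, 1 hit).
print(lru_page_faults([2, 1, 1, 3, 5, 2, 0, 3, 1, 5], 3))   # (9, 1)
```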

LFU page replacement algorithm:-

This is called the least frequently used page replacement algorithm. It can be seen as a modified version of FIFO in which a use frequency is kept for each page. The page with the lowest frequency is chosen for replacement. If all the pages in the frames have the same frequency, the page that came first is replaced, just as in FIFO.

For example:

Here (+) = page fault
(*) = page hit

A frequency chart accompanies the example, in which a page's frequency rises as the page is referenced.

Initially, every page has a frequency of 0. Then, as references arrive, the frequencies increase or decrease.

We have taken three frames.

First, pages 5, 4, and 2 take the empty frames and the frequency of each becomes 1.

Then page 3 replaces 5, because 5 came first; since all three pages had the same frequency, the FIFO rule was used.

Page 2 is already in memory, so it is a hit and its frequency increases from 1 to 2.

Then page 1 replaces 4: the frequency of 4 becomes 0, that of 1 becomes 1, and that of 2 becomes 1.

Then page 4 replaces 3; the frequency of 4 becomes 1 and that of 3 drops to 0.

Then page 0 replaces 2, page 3 replaces 1, and page 1 replaces 4. In each case the page with the lowest frequency was chosen, and FIFO was used to break ties.
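
The exact frequency bookkeeping varies between descriptions, so the following is only a minimal Python sketch of the basic LFU rule (illustrative names, with the assumption that an evicted page's count is simply forgotten and that ties go to the earliest-loaded page, as in the FIFO tie-break described above).

```python
def lfu_page_faults(reference_string, frame_count):
    """Count page faults and hits under least-frequently-used replacement."""
    frames = []                   # resident pages in load order (oldest first)
    freq = {}                     # use count of each resident page
    faults = hits = 0
    for page in reference_string:
        if page in frames:
            hits += 1
            freq[page] += 1       # another reference raises the frequency
        else:
            faults += 1
            if len(frames) == frame_count:
                # Lowest frequency is evicted; ties go to the earliest-loaded page.
                victim = min(frames, key=lambda p: freq[p])
                frames.remove(victim)
                del freq[victim]  # the evicted page's count effectively drops to 0
            frames.append(page)
            freq[page] = 1        # a newly loaded page starts with frequency 1
    return faults, hits
```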

MFU page replacement algorithm:-

This is called the most frequently used page replacement algorithm. It is exactly the opposite of the LFU page replacement algorithm; frequency is used here as well, but the page with the highest frequency is selected for replacement. In other words, the pages that are used the most are the ones replaced.

Second chance page replacement algorithm:-

This is also a modified version of FIFO. A reference bit is used; each page is given a reference bit.

Reference bit:- This records whether a page has been referenced again since it was loaded, and therefore whether it has earned a second chance. A newly loaded page has its reference bit set to 0; when the page is hit, the bit is set to 1. When a page must be replaced, the pages are examined in FIFO order: if the page at the front of the queue has its bit at 1, the bit is cleared back to 0, the page is given a second chance, and the search moves on; if the bit is 0, that page is replaced.

So if the reference bit of every page in the frames is 0, the algorithm behaves exactly like FIFO, and if some page's bit is 1, that page is skipped and FIFO is applied to the remaining pages.

Here (+) = page fault
(*) = page hit

Pages 1 and 3 take the empty slots, causing two page faults.

Page 1 is already in memory, so it is a hit and its reference bit becomes 1. Page 2 then takes an empty slot, causing another page fault.

Next, page 5 replaces 3: the reference bit of 1 was 1, so 1 was skipped and given a second chance (its bit was reset to 0), and of the remaining pages 3 came before 2, so 3 was replaced. The rest of the example proceeds in the same way.
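
The following minimal Python sketch (function and variable names are illustrative) implements the second-chance rule as described above: a hit sets the page's reference bit, and at replacement time pages are examined in FIFO order, with bit-1 pages having their bit cleared and being moved to the back of the queue.

```python
from collections import deque

def second_chance_page_faults(reference_string, frame_count):
    """Count page faults and hits under the second-chance (clock-style) policy."""
    queue = deque()              # resident pages in FIFO order
    ref_bit = {}                 # reference bit of each resident page
    faults = hits = 0
    for page in reference_string:
        if page in ref_bit:
            hits += 1
            ref_bit[page] = 1                  # a hit sets the reference bit
            continue
        faults += 1
        if len(queue) == frame_count:
            # Pages with bit 1 get a second chance: clear the bit, requeue them.
            while ref_bit[queue[0]] == 1:
                skipped = queue.popleft()
                ref_bit[skipped] = 0
                queue.append(skipped)
            victim = queue.popleft()           # first page with bit 0 is evicted
            del ref_bit[victim]
        queue.append(page)
        ref_bit[page] = 0                      # newly loaded page starts at 0
    return faults, hits
```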

Unit 4

Q.1 Explain the structure of how disks are managed.

Ans. From 20 page no 119

Q.2 Explain the disk scheduling algorithms based on the following points:
FCFS, SSTF, SCAN, C-SCAN, LOOK

Ans. Disk scheduling

A computer carries out many operations at one time, so it is very important to manage the requests of all the operations that run at the same time in the system.

Disk scheduling is used by the operating system to handle all these requests and give each of them access to the disk.

The disk's time is divided among all the requests, so disk scheduling determines which request will be serviced at which time.
Scheduling means deciding the order in which all the pending requests are carried out.

In simple words, disk scheduling is used to reduce the seek time of requests. Since the computer receives many requests for operations at a time, the system can become very slow, so it becomes very important to schedule these requests so that the efficiency of the system is not affected.

Disk scheduling is also called I/O scheduling.

Disk scheduling algorithms:-

- FCFS (first come, first served)
- SSTF (shortest seek time first)
- SCAN
- C-SCAN scheduling

FCFS (first come, first served):-

FCFS scheduling is the simplest of all the scheduling algorithms.

In FCFS, whichever request comes first is serviced first. All the requests wait in a queue one after the other; they have a fixed order and are serviced in that same order.

The advantage of this algorithm is that it is very easy to implement.
The disadvantage of this algorithm is that it makes no attempt to reduce seek time, so the total head movement can be large.
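
To make the idea concrete, here is a minimal Python sketch of FCFS disk scheduling (the request queue and starting head position are made-up illustrative values); it simply adds up the head movement in arrival order.

```python
def fcfs_total_seek(requests, head):
    """Total head movement when requests are served strictly in arrival order."""
    total = 0
    for cylinder in requests:
        total += abs(cylinder - head)   # move the head to the next request
        head = cylinder
    return total

# Illustrative queue of cylinder numbers and an assumed starting head position.
print(fcfs_total_seek([98, 183, 37, 122, 14, 124, 65, 67], head=53))   # 640
```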

SSTF (shortest seek time first):-

In the SSTF algorithm, the request with the shortest seek time is serviced first, that is, the request closest to the current head position, and therefore the quickest to reach, is serviced next.

All the pending requests are examined, arranged in order of their seek time, and the one with the least seek time is serviced first.

SSTF is better than FCFS because it reduces the average response time of the system and increases the throughput of the system.

But its disadvantage is that some requests may not be fulfilled for a long time: a request far from the head can starve if requests with shorter seek times keep arriving.
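
A minimal Python sketch of SSTF follows (illustrative names, and the same made-up request queue as in the FCFS sketch); it repeatedly picks the pending request nearest to the current head position.

```python
def sstf_total_seek(requests, head):
    """Total head movement when the closest pending request is always served next."""
    pending = list(requests)
    total = 0
    while pending:
        nearest = min(pending, key=lambda c: abs(c - head))   # shortest seek next
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

print(sstf_total_seek([98, 183, 37, 122, 14, 124, 65, 67], head=53))   # 236
```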

SCAN scheduling:-
This algorithm is also called the elevator algorithm.

In this algorithm, the disk head scans towards one end of the disk, servicing the requests it finds along the way, and then reverses from that end and services the requests found on the way back.

If a new request arrives for a position the head has just passed, it is not serviced until the head returns on the next sweep.
The advantage of SCAN is that it provides higher throughput and a lower average response time than FCFS and SSTF.

But its disadvantage is that the request which waits the longest is one whose location on the disk has only just been visited by the head.
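
Here is a minimal Python sketch of SCAN (illustrative names; it assumes a disk of disk_size cylinders and that the head travels all the way to the edge before reversing, a detail that varies between descriptions).

```python
def scan_total_seek(requests, head, disk_size=200, direction="up"):
    """Total head movement for SCAN: sweep to one edge, then reverse."""
    lower = sorted(c for c in requests if c < head)    # requests below the head
    upper = sorted(c for c in requests if c >= head)   # requests at or above it
    if direction == "up":
        # Serve upward requests, ride to the top edge, then sweep back down.
        order = upper + [disk_size - 1] + lower[::-1]
    else:
        # Serve downward requests, ride to cylinder 0, then sweep back up.
        order = lower[::-1] + [0] + upper
    total = 0
    for cylinder in order:
        total += abs(cylinder - head)
        head = cylinder
    return total

print(scan_total_seek([98, 183, 37, 122, 14, 124, 65, 67], head=53, direction="down"))
```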

C-SCAN scheduling:-
In C-SCAN scheduling, all the requests are treated as if they lie on a circular list, a list which has neither a starting point nor an end point; the end point is also the starting point.

C-SCAN scheduling is similar to SCAN scheduling, but requests are serviced only while the head moves from the starting end of the disk to the other end; after reaching the end, the head jumps back to the starting point and begins the next sweep.
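
And a minimal Python sketch of C-SCAN under the same assumptions (illustrative names; whether the return jump to cylinder 0 is counted in the head movement varies by convention, and it is counted here).

```python
def cscan_total_seek(requests, head, disk_size=200):
    """Total head movement for C-SCAN: one-way sweeps with a jump back to 0."""
    lower = sorted(c for c in requests if c < head)
    upper = sorted(c for c in requests if c >= head)
    total = 0
    # Serve everything above the head while moving toward the last cylinder.
    for cylinder in upper + [disk_size - 1]:
        total += abs(cylinder - head)
        head = cylinder
    if lower:
        # Jump back to cylinder 0 (counted here) and serve the remaining requests.
        total += head
        head = 0
        for cylinder in lower:
            total += abs(cylinder - head)
            head = cylinder
    return total

print(cscan_total_seek([98, 183, 37, 122, 14, 124, 65, 67], head=53))
```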

LOOK: from 20

Q.3 Explain swap space management.

Ans. Swapping is a memory management scheme: a technique in which a process is removed from main memory and stored in secondary memory.

It is used to improve main memory utilization.

The area in secondary memory where the swapped-out process is stored is called swap space.

In a single-tasking operating system, only one process occupies the user program area of memory and remains in memory until the process is completed.
In a multitasking operating system, a situation arises when all the active processes cannot be accommodated in main memory; then one process is swapped out of main memory so that other processes can come in.

Its purpose is to access the data present on the hard disk and bring it into RAM so that application programs can use it.

The thing to remember is that swapping is used only when the data is not present in RAM.

Advantages:
1. Increased memory capacity: Swap-space management allows the operating system to use hard disk space as virtual memory, effectively increasing the available memory capacity.

2. Improved system performance: By using virtual memory, the operating system can swap out less frequently used data from physical memory to disk, freeing up space for more frequently used data and improving system performance.

3. Flexibility: Swap-space management allows the operating system to dynamically allocate and deallocate memory as needed, depending on the demands of running applications.

Disadvantages:
1. Slower access times: Accessing data from disk is slower than accessing data from physical memory, which can result in slower system performance if too much swapping is required.

2. Increased disk usage: Swap-space management requires disk space to be reserved for use as virtual memory, which can reduce the amount of space available for other data storage purposes.

3. Risk of data loss: In some cases, if there is a problem with the swap file, such as a disk error or corruption, data may be lost or corrupted.

Overall, swap-space management is a useful technique for optimizing memory usage and improving system performance. However, it is important to carefully manage swap space allocation and monitor system performance to ensure that excessive swapping does not negatively impact system performance.

Swap-Space:
The area on the disk where the swapped-out processes are stored is called swap space.

Swap-Space Management:
Swap-space management is another low-level task of the operating system. Disk space is used by virtual memory as an extension of main memory. Since disk access is much slower than memory access, relying on swap space can significantly decrease system performance if it is used heavily. Because we want the best possible throughput from our systems, the goal of the swap-space implementation is to give the virtual memory the best throughput. In this article, we discuss how swap space is used, where swap space is located on disk, and how swap space is managed.

Swap-Space Use:
Swap space is used by different operating systems in various ways. Systems that implement swapping may use swap space to hold an entire process image, including its code and data segments. Paging systems may simply store pages that have been pushed out of main memory. The amount of swap space needed on a system can vary from a few megabytes to gigabytes, depending on the amount of physical memory, the amount of virtual memory it is backing, and the way in which the virtual memory is used.

It is safer to overestimate than to underestimate the amount of swap space required, because if a system runs out of swap space it may be forced to abort processes or may crash entirely. Overestimation wastes disk space that could otherwise be used for files, but it does no other harm.

The following table shows the amount of swap space used by different systems:

Figure – Different systems and the amount of swap space they use

Explanation of the above table:
Solaris sets swap space equal to the amount by which virtual memory exceeds pageable physical memory. In the past, Linux suggested setting swap space to double the amount of physical memory. Today this guideline is gone, and most Linux systems use considerably less swap space.

Some operating systems, including Linux, allow the use of multiple swap spaces, including both swap files and dedicated swap partitions. The swap spaces are placed on the disk so that the I/O load produced by paging and swapping is spread across the system's bandwidth.

Figure – Location of swap-space

A swap space can reside in one of two places:

Normal file system
Separate disk partition

If the swap space is simply a large file within the file system, normal file-system routines can be used to create it, name it, and allocate its space. This approach, though easy to implement, is inefficient. Navigating the directory structures and the disk-allocation data structures takes time and extra disk accesses, and during reading or writing of a process image, external fragmentation can greatly increase swapping times by forcing multiple seeks.

The alternative is to create the swap space in a separate raw partition. No file system is present there; instead, a swap-space storage manager is used to allocate and deallocate blocks from the raw partition. It uses algorithms optimized for speed rather than for storage efficiency, because fast access matters more for swap space than efficient use of its storage. Internal fragmentation may increase, but this is acceptable because the life span of data in the swap space is much shorter than that of files in the file system. The raw-partition approach creates a fixed amount of swap space, decided when the disk is partitioned.

Some operating systems are flexible and can swap both in raw partitions and in file-system space; Linux is an example.

Swap-Space Management: An Example –
The traditional UNIX kernel started with an implementation of swapping that copied entire processes between contiguous disk regions and memory. UNIX later evolved to a combination of swapping and paging as paging hardware became available. In Solaris, the designers changed the standard UNIX methods to improve efficiency, and further changes were made in later versions of Solaris.
Linux is quite similar to the Solaris system. In both systems, swap space is used only for anonymous memory, that is, memory not backed by any file. In the Linux system, one or more swap areas may be established. A swap area may be either a swap file on a regular file system or a dedicated swap partition.

Figure – Data structure for swapping on Linux system

Each swap area consists of 4-KB page slots, which are used to hold swapped-out pages. Associated with each swap area is a swap map: an array of integer counters, one for each page slot in the swap area. If the value of a counter is 0, the page slot is free; a nonzero value means the slot is occupied by a swapped page, and the value indicates the number of mappings to that page. For example, a value of 3 indicates that the swapped page is mapped by 3 different processes.
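
As a rough illustration of the counters described above, here is a hypothetical in-memory model in Python (not kernel code; the class and method names are invented for this sketch): a swap map is an array in which 0 marks a free slot and a positive value counts the mappings sharing that slot.

```python
class SwapArea:
    """Toy model of a swap area: one counter per 4-KB page slot."""

    def __init__(self, slot_count):
        self.swap_map = [0] * slot_count      # 0 means the slot is free

    def allocate_slot(self):
        """Claim the first free slot (counter 0) for a single mapping."""
        for i, count in enumerate(self.swap_map):
            if count == 0:
                self.swap_map[i] = 1
                return i
        raise MemoryError("swap area is full")

    def add_mapping(self, slot):
        self.swap_map[slot] += 1              # another process maps the same page

    def release_mapping(self, slot):
        self.swap_map[slot] -= 1              # back at 0, the slot is free again
```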

Q.4 Introduce the Linux operating system.

Ans. Linux is an open-source operating system that acts as an interface between the computer and the user.

It was created by Linus Torvalds in 1991.

Linux is an open-source operating system, which means that anyone can use it for free and modify it.

Windows, by contrast, is a closed-source operating system: it cannot be modified by users, only by the company. In Linux, we can make changes as per our convenience.

Initially, Linux was created only for personal computers, but later it came to be used in servers, mainframe computers, and supercomputers. Apart from this, Linux is also used in routers, cars, televisions (TVs), smart watches, and video game consoles.

Nowadays, the Linux kernel is also used in Android.

Linux is the fastest-growing operating system. Nowadays it is used in everything from smart watches to supercomputers.

In other words, Linux is system software that manages the hardware and resources of the computer system and, in addition, helps hardware and software communicate with each other.

Structure of Linux:-

Figure – Structure of Linux
