Virtual Memory Can Slow Down Performance
Real, or physical, memory exists on RAM chips inside the computer. Virtual memory,
as its name suggests, doesn't physically exist on a memory chip. It is an
optimization technique and is implemented by the operating system in order to give
an application program the impression that it has more memory than actually
exists. Virtual memory is implemented by various operating systems such as
Windows, Mac OS X, and Linux.
Let's say that an operating system needs 120 MB of memory in order to hold all the running programs, but there's currently only 50 MB of available physical memory stored on the RAM chips. The operating system will then set up 120 MB of virtual memory, and will use a program called the virtual memory manager (VMM) to manage that 120 MB. The VMM will create a file on the hard disk that is 70 MB (120 - 50) in size to account for the extra memory that's needed.
Now, how does the VMM function? As mentioned before, the VMM creates a file on
the hard disk that holds the extra memory that is needed by the O.S., which in our
case is 70 MB in size. This file is called a paging file (also known as a swap file), and
plays an important role in virtual memory. The paging file combined with the RAM
accounts for all of the memory. Whenever the O.S. needs a block of memory that's
not in the real (RAM) memory, the VMM takes a block from the real memory that
hasn't been used recently, writes it to the paging file, and then reads the block of
memory that the O.S. needs from the paging file. The VMM then takes the block of
memory from the paging file, and moves it into the real memory in place of the
old block. This process is called swapping (also known as paging), and the blocks of
memory that are swapped are called pages.
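To make the mechanism concrete, here is a toy model of swapping (an illustrative sketch only; the names ram, paging_file, and access_page are invented for this example and do not correspond to any real VMM interface):

```python
# Toy model of swapping: RAM holds a fixed number of pages; everything else
# lives in a simulated paging file on "disk". All names are illustrative.
RAM_CAPACITY = 3                       # pages that fit in physical memory

ram = {}                               # page_id -> page contents (in RAM)
paging_file = {}                       # page_id -> page contents (on disk)
use_order = []                         # tracks which resident page was used least recently

def access_page(page_id):
    """Return a page's contents, swapping it in from the paging file if needed."""
    if page_id in ram:                 # page is already resident: no I/O
        use_order.remove(page_id)
        use_order.append(page_id)
        return ram[page_id]

    # Page fault: make room by writing out a page that hasn't been used recently.
    if len(ram) >= RAM_CAPACITY:
        victim = use_order.pop(0)
        paging_file[victim] = ram.pop(victim)   # "write to the paging file"

    # Read the requested page in from the paging file (or allocate it fresh).
    ram[page_id] = paging_file.pop(page_id, f"contents of page {page_id}")
    use_order.append(page_id)
    return ram[page_id]

for p in [1, 2, 3, 4, 1]:              # page 1 is swapped out, then back in
    access_page(p)
print(sorted(ram))                     # [1, 3, 4]
```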
There are two reasons why one would want this: the first is to allow the use of programs that are too big to physically fit in memory; the other is to allow for multitasking, that is, multiple programs running at once.
In a computer operating system that uses paging for virtual memory management,
page replacement algorithms decide which memory pages to page out (swap out,
write to disk) when a page of memory needs to be allocated. Paging happens when
a page fault occurs and a free page cannot be used to satisfy the allocation, either
because there are none, or because the number of free pages is lower than some
threshold.
When the page that was selected for replacement and paged out is referenced
again it has to be paged in (read in from disk), and this involves waiting for I/O
completion. This determines the quality of the page replacement algorithm: the less
time waiting for page-ins, the better the algorithm. A page replacement algorithm
looks at the limited information about accesses to the pages provided by hardware,
and tries to guess which pages should be replaced to minimize the total number of
page misses, while balancing this with the costs (primary storage and processor
time) of the algorithm itself.
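One common way to reason about this trade-off is to simulate a reference string and count page faults under a pluggable replacement policy. The sketch below is illustrative; the choose_victim interface and the lru_victim helper are assumptions made for this example, not part of any standard API:

```python
def count_page_faults(reference_string, num_frames, choose_victim):
    """Count page faults for a given replacement policy.

    choose_victim(frames, history) returns the page to evict; `history`
    is the list of references seen so far (illustrative interface only).
    """
    frames, history, faults = [], [], 0
    for page in reference_string:
        history.append(page)
        if page in frames:
            continue                      # hit: no fault
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)           # free frame available
        else:
            frames.remove(choose_victim(frames, history))
            frames.append(page)
    return faults

# Example policy: evict the resident page referenced longest ago (LRU).
def lru_victim(frames, history):
    last_use = {p: i for i, p in enumerate(history)}
    return min(frames, key=lambda p: last_use[p])

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_page_faults(refs, 3, lru_victim))   # 10 faults with LRU and 3 frames
```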
PAGE REPLACEMENT ALGORITHMS
Demand Paging
A demand paging system is quite similar to a paging system with swapping where
processes reside in secondary memory and pages are loaded only on demand, not
in advance. When a context switch occurs, the operating system does not copy any
of the old program's pages out to the disk or any of the new program's pages into
the main memory. Instead, it just begins executing the new program after loading
the first page and fetches that program's pages as they are referenced.
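A minimal sketch of the demand-paging idea, where a page is brought in only on its first reference; the PageTableEntry, load_from_disk, and reference names are hypothetical stand-ins for the real page-table and disk machinery:

```python
from dataclasses import dataclass

@dataclass
class PageTableEntry:
    present: bool = False      # is the page in main memory?
    frame: object = None       # contents once loaded

def load_from_disk(process, page_no):
    return f"{process}:page{page_no}"   # stand-in for a disk read

def reference(page_table, process, page_no):
    entry = page_table.setdefault(page_no, PageTableEntry())
    if not entry.present:                       # page fault: load on demand
        entry.frame = load_from_disk(process, page_no)
        entry.present = True
    return entry.frame

page_table = {}
reference(page_table, "prog_b", 0)              # only the first page is loaded
reference(page_table, "prog_b", 7)              # page 7 loaded when referenced
print(sorted(page_table))                       # [0, 7] -- nothing loaded in advance
```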
FIFO (First-In, First-Out)
The oldest page in main memory is the one selected for replacement. FIFO is easy to implement: keep a list, replace pages from the tail, and add new pages at the head.
FIFO is not a stack algorithm. In certain cases, the number of page faults can actually increase when more frames are allocated to the process (this is Belady's anomaly). In the example below, there are 9 page faults for 3 frames and 10 page faults for 4 frames.
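As a quick check of these numbers, here is a minimal FIFO simulation (an illustrative sketch, not code from the text) run on the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, which produces exactly 9 faults with 3 frames and 10 faults with 4 frames:

```python
def fifo_faults(reference_string, num_frames):
    """Simulate FIFO replacement and return the number of page faults."""
    frames, queue, faults = set(), [], 0
    for page in reference_string:
        if page in frames:
            continue                       # hit
        faults += 1
        if len(frames) >= num_frames:
            oldest = queue.pop(0)          # evict the page loaded earliest
            frames.remove(oldest)
        frames.add(page)
        queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults
print(fifo_faults(refs, 4))   # 10 faults -- more frames, more faults (Belady's anomaly)
```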
LRU (Least Recently Used)
Choose the page that was last referenced the longest time ago.
LRU can be managed with a list called the LRU stack or the paging stack (a data structure); the first entry in the LRU stack describes the least recently referenced page, and the last entry describes the most recently referenced page.
Exact LRU is too slow to be used in practice for managing the page table, but many systems use approximations to LRU.
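A sketch of the LRU stack idea using Python's collections.OrderedDict, with pages kept in order from least to most recently referenced (the LRUStack class and its method names are illustrative, not from the text):

```python
from collections import OrderedDict

class LRUStack:
    """Pages ordered from least recently used (front) to most recently used (back)."""

    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.pages = OrderedDict()
        self.faults = 0

    def reference(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)        # hit: move to the "most recent" end
            return
        self.faults += 1                        # page fault
        if len(self.pages) >= self.num_frames:
            self.pages.popitem(last=False)      # evict the least recently used page
        self.pages[page] = True

lru = LRUStack(num_frames=3)
for p in [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]:
    lru.reference(p)
print(lru.faults)    # 10 page faults for this reference string with 3 frames
```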
LFU (Least Frequently Used)
The page with the smallest reference count is the one selected for replacement.
This algorithm suffers from the situation in which a page is used heavily during the initial phase of a process but is then never used again.
MFU (Most Frequently Used)
This algorithm replaces the page with the largest reference count, based on the argument that the page with the smallest count was probably just brought in and has yet to be used.
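Both counting-based policies can share the same bookkeeping and differ only in whether the page with the smallest (LFU) or largest (MFU) count is evicted. The helper below is an assumed illustration, not a standard routine:

```python
def count_based_faults(reference_string, num_frames, policy="LFU"):
    """Count page faults for count-based replacement (LFU or MFU).

    LFU evicts the resident page with the smallest reference count;
    MFU evicts the one with the largest count. Ties are broken arbitrarily.
    """
    counts, frames, faults = {}, set(), 0
    for page in reference_string:
        counts[page] = counts.get(page, 0) + 1      # per-page reference count
        if page in frames:
            continue
        faults += 1
        if len(frames) >= num_frames:
            pick = min if policy == "LFU" else max
            victim = pick(frames, key=lambda p: counts[p])
            frames.remove(victim)
        frames.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(count_based_faults(refs, 3, "LFU"), count_based_faults(refs, 3, "MFU"))
```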
RAND (Random)
A resident page is chosen for replacement at random.
NRU (Not Recently Used)
As an approximation to LRU, select one of the pages that has not been used recently (as opposed to identifying exactly which one has not been used for the longest amount of time).
Keep one bit per page, called the "used bit" or "reference bit", where 1 means used recently and 0 means not used recently.
Variants of this scheme are used in many operating systems, including UNIX and Macintosh.
Most variations use a scan pointer and go through the page frames one by one, in some order, looking for a page that has not been used recently.
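A simplified sketch of the reference-bit scheme with a scan pointer that clears bits as it passes (close in spirit to a clock-style scan; the class and field names are invented for illustration):

```python
class ReferenceBitMemory:
    """Frames carry a reference bit; a scan pointer looks for a page whose bit is 0."""

    def __init__(self, num_frames):
        self.frames = [None] * num_frames     # page held in each frame
        self.ref_bit = [0] * num_frames       # 1 = used recently, 0 = not
        self.hand = 0                         # scan pointer
        self.faults = 0

    def reference(self, page):
        if page in self.frames:
            self.ref_bit[self.frames.index(page)] = 1   # hardware would set this bit
            return
        self.faults += 1                                # page fault
        # Scan frames one by one; clear bits as we pass, stop at a bit that is 0.
        while self.ref_bit[self.hand] == 1:
            self.ref_bit[self.hand] = 0
            self.hand = (self.hand + 1) % len(self.frames)
        self.frames[self.hand] = page                   # replace a not-recently-used page
        self.ref_bit[self.hand] = 1
        self.hand = (self.hand + 1) % len(self.frames)

mem = ReferenceBitMemory(num_frames=3)
for p in [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]:
    mem.reference(p)
print(mem.faults)    # 10 page faults for this reference string with 3 frames
```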
Variations of NRU:
Variation 1: Linear Scanning Algorithm
This algorithm is sometimes called the Clock Algorithm or the Second Chance algorithm, but both terms are also used for the FIFO Second Chance algorithm described below.