
Unit-4 Virtual Memory

Virtual memory is a memory management technique used by operating systems to give the appearance of a large, continuous block of memory to applications, even if the physical memory (RAM) is limited. It allows larger applications to run on systems with less RAM.
 The main objective of virtual memory is to support multiprogramming. Its main advantage is that a running process does not need to be entirely in memory.
 Programs can be larger than the available physical memory. Virtual memory provides an abstraction of main memory, eliminating concerns about storage limitations.
 A memory hierarchy, consisting of a computer system's memory and a disk, enables a process to operate with only some portions of its address space in RAM, allowing more processes to be in memory.
 Virtual memory is what its name indicates: an illusion of a memory that is larger than the real memory. The software component of virtual memory is called the virtual memory manager. The basis of virtual memory is the non-contiguous memory allocation model.

The main visible advantage of this scheme is that programs can be larger
than physical memory. Virtual memory serves two purposes. First, it
allows us to extend the use of physical memory by using disk. Second, it
allows us to have memory protection, because each virtual address is
translated to a physical address.

The entire program need not be loaded into main memory in the following situations:

 User-written error handling routines are used only when an error occurs in the data or computation.
 Certain options and features of a program may be used rarely.
 Many tables are assigned a fixed amount of address space even though only a small amount of the table is actually used.
The ability to execute a program that is only partially in memory would confer many benefits:
 Fewer I/O operations would be needed to load or swap each user program into memory.
 A program would no longer be constrained by the amount of physical memory that is available.
 Because each user program could take less physical memory, more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.

ADVANTAGES

1. Compensates for shortages of physical memory.
2. Cheaper than adding more physical RAM.
3. Supports multitasking by allowing more processes to reside in memory.
4. Avoids memory fragmentation.
5. Enhances data security through memory protection.

How Virtual Memory Works

Virtual memory is a technique implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.
 All memory references within a process are logical addresses that are dynamically translated into physical addresses at run time. This means that a process can be swapped in and out of main memory such that it occupies different places in main memory at different times during the course of its execution.
 A process may be broken into a number of pieces, and these pieces need not be contiguously located in main memory during execution. The combination of dynamic run-time address translation and the use of a page or segment table permits this.
Types of Virtual Memory

In a computer, virtual memory is managed by the Memory Management Unit (MMU), which is often built into the CPU. The CPU generates virtual addresses that the MMU translates into physical addresses.
There are two main types of virtual memory:
 Paging
 Segmentation
Paging
Paging divides memory into small fixed-size blocks called pages.
When the computer runs out of RAM, pages that aren’t currently in
use are moved to the hard drive, into an area called a swap file. The
swap file acts as an extension of RAM. When a page is needed again,
it is swapped back into RAM, a process known as page swapping. This
ensures that the operating system (OS) and applications have enough
memory to run.
Advantages of Paging:
 Eliminates external fragmentation.
 Simplifies memory allocation.
 Allows for efficient use of physical memory.
Demand Paging: The process of loading the page into memory on
demand (whenever a page fault occurs) is known as demand paging.
The process includes the following steps:

 If the CPU tries to refer to a page that is currently not available
in the main memory, it generates an interrupt indicating a
memory access fault.
 The OS puts the interrupted process in a blocking state. For the
execution to proceed the OS must bring the required page into
the memory.
 The OS will locate the required page on secondary storage (the backing
store).
 The required page will be brought from secondary storage into physical
memory. If no frame is free, a page replacement algorithm is used
for the decision-making of which page in physical memory to
replace.
 The page table will be updated accordingly.
 The signal will be sent to the CPU to continue the program
execution and it will place the process back into the ready
state.
Hence whenever a page fault occurs these steps are followed by the
operating system and the required page is brought into memory.
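The steps above can be sketched in a few lines of code. This is a toy illustration under assumed structures (a dict as the page table, a list as the free-frame pool, a dict as the backing store), not an OS implementation; in particular, the victim choice is an arbitrary placeholder rather than a real replacement algorithm.

```python
# Toy demand-paging sketch: pages are loaded only when first accessed.
FRAMES = 3                                 # physical frames available (assumed)
page_table = {}                            # resident pages: page -> frame
free_frames = list(range(FRAMES))
backing_store = {0: "d0", 1: "d1", 2: "d2", 3: "d3"}   # pages on disk

def access(page):
    if page in page_table:                 # page already in memory
        return backing_store[page]
    # Page fault: the page must be brought in from secondary storage.
    if not free_frames:
        victim = next(iter(page_table))    # placeholder victim choice
        free_frames.append(page_table.pop(victim))
    page_table[page] = free_frames.pop()   # update the page table
    return backing_store[page]             # resume the interrupted access

for p in [0, 1, 2, 3, 0]:
    access(p)
print(sorted(page_table))   # pages resident after the accesses → [0, 2, 3]
```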
Segmentation
Segmentation divides virtual memory into segments of different
sizes. Segments that aren’t currently needed can be moved to the
hard drive. The system uses a segment table to keep track of each
segment’s status, including whether it’s in memory, if it’s been
modified, and its physical address. Segments are mapped into a
process’s address space only when needed.

 Page Hit: Occurs when a program references a memory page that is already present in main physical memory (RAM). This leads to quick retrieval of the required data without additional disk access.

 Page Miss: Happens when a program accesses a memory page that is not currently resident in main memory. This triggers a page fault and requires the OS to bring the missing page into RAM from secondary storage.

 Page Fault Time: The total time taken to handle a page fault. It includes the time to find the required page on disk, swap it into RAM, update data structures, and restart the interrupted program.

 Page Fault Delay: The time interval between the moment a page fault occurs and the time when the corresponding page is fully loaded into main memory, allowing the program to continue its execution.

What is a Page Fault in an Operating System?

When the page accessed by the CPU is not found in main memory, the situation is referred to as a page fault. If the required page is not loaded into memory, a page fault trap arises.
To recover, the required page has to be fetched from secondary memory (hard disk) into main memory.

The page fault raises an exception, which informs the operating system that it needs to retrieve pages from virtual memory in order to continue execution. Once all the data has been placed into physical memory, the software resumes normal operation. The page fault procedure happens in the background, so the user isn't aware of it. The hardware traps to the kernel, the program counter (PC) is typically saved on the stack, and information about the state of the current instruction is saved in CPU registers.

Describe the Actions Taken by the Operating System When a Page Fault
Occurs.
When a page fault occurs in an operating system, it means that the requested
memory page isn't currently in physical RAM but rather resides in secondary
storage, like a hard drive or SSD. Here are the actions taken by the OS:
 Page Fault Trap: The CPU detects a page fault while trying to access a
memory page that isn't currently in RAM. This triggers a page fault
exception, which transfers control to the operating system.
 Page Table Lookup: The operating system looks up the page table to
determine the location of the required page in secondary storage, such
as a disk.
 Fetch Page from Secondary Storage: The OS initiates a process to fetch
the required page from secondary storage into an available page frame
in RAM. This involves reading the page data from disk into a free page
frame.
 Update Page Table: Once the page is successfully brought into RAM, the
operating system updates the page table to reflect the new location of
the page in physical memory. This includes updating the page table entry
with the physical address of the page frame where the page now resides.
 Resume Process Execution: Finally, the operating system restarts the
interrupted memory access instruction that caused the page fault. With
the required page now available in RAM, the process can proceed as
expected, accessing the data it needs from memory.

Page Replacement Algorithms


Page replacement is needed in operating systems that use virtual memory with demand paging. As we know, in demand paging only a subset of a process's pages is loaded into memory. This is done so that we can have more processes in memory at the same time.

1- FIFO Page Replacement


The simplest page-replacement algorithm is first-in, first-out (FIFO). A FIFO replacement algorithm associates with each page the time when that page was brought into memory. When a page must be replaced, the oldest page is chosen. Notice that it is not strictly necessary to record the actual time; a FIFO queue of the pages in memory suffices.

In this, we maintain a queue of all the pages that are in the memory currently. The
oldest page in the memory is at the front end of the queue and the most recent page
is at the back or rear end of the queue.

Whenever a page fault occurs, the operating system looks at the front end of the
queue to know the page to be replaced by the newly requested page. It also adds
this newly requested page at the rear end and removes the oldest page from the
front end of the queue.

Example: Consider the page reference string 3, 1, 2, 1, 6, 5, 1, 3 with 3 page frames. Tracing FIFO on this string gives 7 page faults; only the second reference to page 1 is a hit.
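The trace can be checked with a short simulation. The sketch below implements FIFO with a queue as described above; the last two calls use the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (an assumed example, not from this unit) to reproduce Belady's anomaly: more frames, more faults.

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults for a reference string under FIFO replacement."""
    queue = deque()            # front = oldest page in memory
    resident = set()
    faults = 0
    for page in refs:
        if page in resident:
            continue                              # page hit
        faults += 1                               # page fault
        if len(queue) == frames:
            resident.discard(queue.popleft())     # evict the oldest page
        queue.append(page)
        resident.add(page)
    return faults

print(fifo_faults([3, 1, 2, 1, 6, 5, 1, 3], 3))              # → 7
print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # → 9
print(fifo_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))  # → 10 (Belady)
```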
Belady's anomaly: Generally, if we increase the number of frames in memory, the number of page faults should decrease. Belady's anomaly refers to the phenomenon where increasing the number of frames in memory increases the number of page faults as well.
Advantages
 Simple to understand and implement
 Does not cause much overhead
Disadvantages
 Poor performance in general
 Ignores how recently or how frequently a page was used; it simply replaces the oldest page
 Suffers from Belady's anomaly

Optimal Page Replacement in OS

 Optimal page replacement is the best page replacement algorithm, as it results in the least number of page faults. In this algorithm, a page is replaced with the one that will not be used for the longest duration of time in the future. In simple terms, the page that will be referenced farthest in the future is the one replaced.
 Example: Consider again the reference string 3, 1, 2, 1, 6, 5, 1, 3 with 3 page frames; optimal replacement gives 5 page faults.
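A simulation of the optimal policy has to look ahead in the reference string: on each fault it evicts the resident page whose next use is farthest in the future (pages never used again are evicted first). This is only a sketch for tracing examples; a real OS cannot implement it, since it would require knowing future references.

```python
def optimal_faults(refs, frames):
    """Count page faults under optimal (farthest-future-use) replacement."""
    resident = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in resident:
            continue                              # page hit
        faults += 1                               # page fault
        if len(resident) == frames:
            future = refs[i + 1:]
            # Evict the page whose next use is farthest away;
            # pages that never appear again sort after all others.
            victim = max(resident, key=lambda p: future.index(p)
                         if p in future else len(future))
            resident.discard(victim)
        resident.add(page)
    return faults

print(optimal_faults([3, 1, 2, 1, 6, 5, 1, 3], 3))   # → 5
```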
Advantages
 Excellent efficiency
 Less complexity
 Easy to understand
 Can be implemented with simple data structures
 Used as the benchmark for other algorithms
Disadvantages
 More time consuming to simulate
 Error handling is difficult
 Needs future knowledge of the program's references, which is not possible in general

Least Recently Used (LRU) Page Replacement Algorithm

 The least recently used (LRU) page replacement algorithm keeps track of the usage of pages over a period of time. It works on the principle of locality of reference, which states that a program tends to access the same set of memory locations repeatedly over a short period of time. So pages that have been used heavily in the recent past are likely to be used heavily in the near future as well.
 In this algorithm, when a page fault occurs, the page that has not been used for the longest duration of time is replaced by the newly requested page.
 Example: Let's see the performance of LRU on the same reference string 3, 1, 2, 1, 6, 5, 1, 3 with 3 page frames; LRU gives 7 page faults.

Advantages
 It is amenable to full analysis
 Doesn't suffer from Belady's anomaly
 Often more efficient than other algorithms
Disadvantages
 Requires additional data structures to implement
 More complex
 Needs hardware assistance to be efficient

Hardware and Control Structures

The implementation of virtual memory relies on specific hardware components and control structures to ensure efficient and secure operation.
Hardware Components
1. Memory Management Unit (MMU):
○ A hardware device that translates virtual addresses to physical addresses.
○ Handles address translation and access-rights verification.
2. Translation Lookaside Buffer (TLB):
○ A cache that stores recent virtual-to-physical address translations.
○ Improves performance by reducing the need to access the page table frequently.
3. Page Table:
○ A data structure maintained in memory to map virtual pages to physical frames.
○ Each entry in the page table contains:
■ Frame Number: The number of the physical frame that holds the page.
■ Status Bits: Metadata, such as valid/invalid, access rights, and the dirty bit.
4. Secondary Storage:
○ Used as a backing store for virtual memory.
○ Contains swap space for pages not currently in physical memory.

Control Structures

1. Page Table Entries (PTEs):
○ Contain information about the mapping of virtual pages to physical frames.
○ Include metadata for memory protection and page replacement.
2. TLB Management:
○ Handles TLB misses by accessing the page table and updating the TLB.
3. Page Replacement Algorithms:
○ Determine which page to evict from physical memory when a new page needs to be loaded.
○ Common algorithms include:
■ Least Recently Used (LRU)
■ First-In, First-Out (FIFO)
■ Optimal Page Replacement
4. Interrupt Handlers:
○ Handle page faults by loading the required page from secondary storage into physical memory.
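The interaction of these structures can be sketched as a simple translation path: consult the TLB first, walk the page table on a TLB miss, and raise a page fault when the valid bit is clear. The page size, table contents, and names below are illustrative assumptions, not a description of any particular MMU.

```python
PAGE_SIZE = 4096        # assume 4 KiB pages

# Page table: virtual page number -> (frame number, valid bit)
page_table = {0: (7, True), 1: (3, True), 2: (None, False)}
tlb = {}                # cache of recent translations: vpn -> frame

def translate(vaddr):
    vpn, offset = divmod(vaddr, PAGE_SIZE)   # split into page number + offset
    if vpn in tlb:                           # TLB hit: no page-table access
        frame = tlb[vpn]
    else:                                    # TLB miss: walk the page table
        frame, valid = page_table[vpn]
        if not valid:
            raise RuntimeError(f"page fault: page {vpn} not resident")
        tlb[vpn] = frame                     # refill the TLB entry
    return frame * PAGE_SIZE + offset

print(translate(4200))   # page 1, offset 104 → 3 * 4096 + 104 = 12392
```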

What is Locality of Reference?


Locality of reference refers to the phenomenon in which a computer program tends to access the same set of memory locations over a particular time period. In other words, locality of reference is the tendency of a program to access instructions whose addresses are near one another. The property of locality of reference is mainly exhibited by loops and subroutine calls in a program.
1. In the case of loops, the CPU repeatedly refers to the set of instructions that constitute the loop.
2. In the case of subroutine calls, the same set of instructions is fetched from memory each time the subroutine is called.
3. References to data items also get localized, meaning the same data item is referenced again and again.

When the CPU wants to read or fetch data or an instruction, it first accesses the cache memory, since the cache is close to the CPU and provides very fast access. If the required data or instruction is found there, it is fetched; this situation is known as a cache hit. If it is not found in the cache memory, the situation is known as a cache miss, and main memory is searched for the required data or instruction instead. When it is found there, one of two things can happen:
1. The first way is that the CPU fetches the required data or instruction and simply uses it. But when the same data or instruction is required again, the CPU has to access the same main memory location again, and we already know that main memory is the slowest level to access.
2. The second way is to also store the data or instruction in the cache memory, so that if it is needed again in the near future it can be fetched much faster.
What is Cache Operation?
Cache operations allow fast data retrieval by exploiting the principle of locality of reference: they store the same or nearby data in cache memory when it is frequently accessed by the CPU. There are two forms of locality by which data or instructions fetched from main memory come to be stored in cache memory:
1. Temporal Locality – Temporal locality means that the data or instruction currently being fetched may be needed again soon. So we store that data or instruction in cache memory to avoid searching main memory again for the same item. When the CPU accesses a main memory location to read required data or an instruction, the item is also stored in cache memory, based on the fact that the same data or instruction may be needed in the near future. If some data is referenced, there is a high probability that it will be referenced again soon.
2. Spatial Locality – Spatial locality means that data or instructions near the current memory location being fetched may be needed soon. This is slightly different from temporal locality: here we are talking about nearby memory locations, while in temporal locality we were talking about the same memory location being fetched again.

Cache Performance: The performance of the cache is measured in terms of hit ratio. When the CPU refers to memory and finds the data or instruction in the cache memory, it is known as a cache hit. If the desired data or instruction is not found in the cache memory and the CPU refers to main memory to find it, it is known as a cache miss.
Hit + Miss = Total CPU References
Hit Ratio (h) = Hit / (Hit + Miss)
Miss Ratio = 1 - Hit Ratio (h) = Miss / (Hit + Miss)
The memory system consists of two levels: cache and main memory. If Tc is the time to access cache memory and Tm is the time to access main memory, then the average access time Tavg is:

For simultaneous access:
Tavg = h * Tc + (1 - h) * Tm
For hierarchical access:
Tavg = h * Tc + (1 - h) * (Tm + Tc)
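Plugging illustrative numbers into these formulas shows the effect of the hit ratio. The values h = 0.9, Tc = 10 ns, and Tm = 100 ns below are assumed for the example, not measurements.

```python
def avg_access_time(h, tc, tm, hierarchical=False):
    """Average memory access time (same units as tc and tm)."""
    if hierarchical:
        return h * tc + (1 - h) * (tm + tc)   # a miss pays Tc, then Tm
    return h * tc + (1 - h) * tm              # simultaneous access

# Assumed values: hit ratio 0.9, cache 10 ns, main memory 100 ns
print(round(avg_access_time(0.9, 10, 100), 3))                     # → 19.0
print(round(avg_access_time(0.9, 10, 100, hierarchical=True), 3))  # → 20.0
```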
