Operating Systems
C-CS316 Spring 2024
LECTURE 11:
Virtual Memory
Dr. Basma Hassan Dr. Ahmed Salama
[email protected] [email protected]
Copy-on-Write
Page Replacement
Allocation of Frames
Thrashing
Memory-Mapped Files
Virtual memory is a memory-management technique that gives each process the illusion of a large, contiguous
working memory, which can be larger than the actual physical memory (RAM) installed on the system.
• Virtual memory can be implemented via:
• Demand paging
• Demand segmentation
• Virtual memory also provides isolation between processes: each process operates within its own virtual address
space, preventing one process from accessing the memory of another process.
• With swapping, the pager would have to guess which pages will be used before the process is swapped out again
• Instead, with demand paging, the pager brings only the needed pages into memory
• How is that set of pages determined?
• Need new MMU functionality to implement demand paging
• If pages needed are already memory resident
• No difference from non-demand paging
• If page needed and not memory resident
• Need to detect and load the page into memory from storage
• Without changing program behavior
• Without programmer needing to change code
During MMU address translation, if the valid–invalid bit in the page-table entry is i (invalid), a page fault occurs
• When a page fault occurs, the operating system must bring the desired page from secondary
storage into main memory.
• Most operating systems maintain a free-frame list -- a pool of free frames for satisfying
such requests.
• Operating systems typically allocate free frames using a technique known as zero-fill-on-
demand -- the contents of the frames are zeroed out before being allocated.
• When a system starts up, all available memory is placed on the free-frame list.
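As a rough illustration of the free-frame list and zero-fill-on-demand, here is a minimal C sketch; the structure and function names (struct frame, alloc_frame, free_frame) are hypothetical and not taken from any real kernel. It only shows the idea of wiping a frame's contents before handing it to a faulting process.

```c
#include <stdio.h>
#include <string.h>
#include <stddef.h>

#define PAGE_SIZE 4096

/* Hypothetical frame descriptor: each free frame is linked into a list. */
struct frame {
    struct frame *next;
    unsigned char data[PAGE_SIZE];
};

static struct frame *free_list;        /* head of the free-frame list */

/* Hand a frame to a faulting process, zero-fill-on-demand style:
 * the frame's previous contents are wiped before it is handed out,
 * so no data leaks from its previous owner. */
struct frame *alloc_frame(void) {
    struct frame *f = free_list;
    if (f == NULL)
        return NULL;                   /* pool empty: caller must run page replacement */
    free_list = f->next;
    memset(f->data, 0, PAGE_SIZE);     /* zero-fill before allocation */
    return f;
}

/* Return a frame to the pool, e.g., after a page is evicted. */
void free_frame(struct frame *f) {
    f->next = free_list;
    free_list = f;
}

int main(void) {
    static struct frame pool[4];       /* tiny stand-in for "all available memory" */
    for (int i = 0; i < 4; i++)
        free_frame(&pool[i]);          /* at startup, every frame goes on the free list */
    struct frame *f = alloc_frame();
    printf("allocated a zeroed frame, first byte = %d\n", f->data[0]);
    return 0;
}
```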
• The Effective Access Time (EAT) represents the average time it takes to perform a
memory access, taking into account both the time required for accessing data stored in
memory and the additional overhead incurred in case of a page fault.
• Page Fault Rate: 0 ≤ p ≤ 1
• If p = 0, there are no page faults: all pages the program needs are already in memory
• If p = 1, every memory reference results in a page fault
• Effective Access Time (EAT)
EAT = (1 – p) × memory access + p × (page fault overhead + swap page out + swap page in)
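To make the EAT formula concrete, here is a minimal sketch with assumed numbers (a 200 ns memory access and an 8 ms combined page-fault service time; these figures are illustrative, not taken from the lecture):

```c
#include <stdio.h>

int main(void) {
    double mem_access = 200.0;         /* assumed memory access time, in ns           */
    double fault_service = 8000000.0;  /* assumed page-fault service time, 8 ms in ns */
    double rates[] = {0.0, 0.0001, 0.001};

    for (int i = 0; i < 3; i++) {
        double p = rates[i];
        /* EAT = (1 - p) * memory access + p * (page fault overhead + swaps) */
        double eat = (1.0 - p) * mem_access + p * fault_service;
        printf("p = %.4f  ->  EAT = %.1f ns\n", p, eat);
    }
    return 0;
}
```

Even a fault rate of one in 10,000 accesses pushes the EAT from 200 ns to roughly 1,000 ns, a 5x slowdown, which is why page faults must remain rare.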
• Copy-on-Write (COW) allows parent and child processes to initially share the same pages in memory
• If either process modifies a shared page, only then is the page copied
• COW allows more efficient process creation as only modified pages are copied
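A minimal user-level sketch of the effect on a POSIX system: after fork(), parent and child each behave as if they had a private copy of the buffer, while the kernel shares the underlying physical page and copies it only when one of them writes to it. (The program only demonstrates the visible behavior; the sharing itself happens inside the kernel.)

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    char *buf = malloc(4096);          /* roughly one page of data                    */
    memset(buf, 'A', 4096);            /* page is now resident and written            */

    pid_t pid = fork();                /* child initially shares pages copy-on-write  */
    if (pid == 0) {
        buf[0] = 'B';                  /* write fault: kernel copies this page for the child */
        printf("child sees:  %c\n", buf[0]);
        exit(0);
    }
    wait(NULL);
    printf("parent sees: %c\n", buf[0]);   /* still 'A': the parent's page was never modified */
    free(buf);
    return 0;
}
```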
• The free-frame pool should always contain free frames so demand paging can execute quickly
• We don't want to have to free a frame, on top of the other processing, at page-fault time
• Page replacement – find some page in memory that is not really in use and page it out
• Use modify (dirty) bit to reduce overhead of page transfers – only modified pages
are written to disk
• Basic page replacement:
1. Find the location of the desired page on secondary storage
2. Find a free frame; if there is no free frame, use a page-replacement algorithm to select a victim frame, and write the victim to storage if it has been modified
3. Bring the desired page into the (newly) free frame; update the page and frame tables
4. Restart the instruction that caused the page fault
• Page-replacement example (see figure): 14 page faults
• Optimal algorithm: replace the page that will not be used for the longest period of time
• 9 page faults is optimal for the example
• How do you know this? You can't read the future
• Used for measuring how well other replacement algorithms perform
• Since this algorithm requires knowledge of future memory accesses, it is often used as a theoretical
benchmark rather than a practical algorithm.
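Because OPT needs the entire future reference string, it is easy to simulate offline but impossible to implement online. The sketch below assumes the lecture's example is the common textbook reference string with 3 frames; under that assumption it reports the 9 faults quoted above.

```c
#include <stdio.h>

/* Count page faults under the Optimal (OPT) policy: on a fault with no
 * free frame, evict the resident page whose next use is farthest in the
 * future (or that is never used again). */
int opt_faults(const int *ref, int n, int nframes) {
    int frames[16];                        /* resident pages; assumes nframes <= 16 */
    int used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == ref[i]) { hit = 1; break; }
        if (hit) continue;

        faults++;
        if (used < nframes) { frames[used++] = ref[i]; continue; }

        /* choose the victim: resident page with the farthest next use */
        int victim = 0, farthest = -1;
        for (int j = 0; j < used; j++) {
            int next = n;                  /* n means "never used again" */
            for (int k = i + 1; k < n; k++)
                if (ref[k] == frames[j]) { next = k; break; }
            if (next > farthest) { farthest = next; victim = j; }
        }
        frames[victim] = ref[i];
    }
    return faults;
}

int main(void) {
    /* Assumed example reference string (the classic textbook one) */
    int ref[] = {7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1};
    printf("OPT page faults: %d\n", opt_faults(ref, 20, 3));
    return 0;
}
```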
• LRU (Least Recently Used): replace the page that has not been used for the longest period of time
• Counter implementation
• Every page-table entry has a counter; every time the page is referenced through this entry, copy
the clock into the counter
• When a page needs to be replaced, look at the counters to find the smallest value
• A search through the table is needed
• Stack implementation
• Keep a stack of page numbers in a doubly linked list:
• When a page is referenced:
• move it to the top
• requires 6 pointers to be changed
• But each update is more expensive
• No search needed for replacement
• LRU and OPT are cases of stack algorithms that don’t have Belady’s Anomaly
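A counter-based LRU sketch with assumed, simplified structures (a software clock and a tiny page table); in reality the counter would have to be updated by hardware on every memory reference.

```c
#include <stdio.h>
#include <limits.h>

#define NPAGES 64

/* Hypothetical page-table entry: resident flag plus the LRU counter. */
struct pte {
    int valid;                      /* is the page resident in memory?   */
    unsigned long last_used;        /* clock value at the last reference */
};

static struct pte page_table[NPAGES];
static unsigned long lru_clock;     /* incremented on every reference    */

/* Called on every reference to page p: copy the clock into its entry. */
void touch(int p) {
    page_table[p].last_used = ++lru_clock;
}

/* Called when a victim is needed: search for the resident page with the
 * smallest counter value, i.e. the least recently used one. */
int pick_victim(void) {
    int victim = -1;
    unsigned long oldest = ULONG_MAX;
    for (int p = 0; p < NPAGES; p++) {
        if (page_table[p].valid && page_table[p].last_used < oldest) {
            oldest = page_table[p].last_used;
            victim = p;
        }
    }
    return victim;                  /* -1 if nothing is resident */
}

int main(void) {
    page_table[3].valid = 1; touch(3);
    page_table[7].valid = 1; touch(7);
    touch(3);                       /* page 3 is now more recent than page 7 */
    printf("victim: %d\n", pick_victim());   /* prints 7 */
    return 0;
}
```

The linear scan in pick_victim() is exactly the "search through table" cost noted above, which motivates the stack implementation and, in practice, LRU approximations such as second chance.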
• Second-chance algorithm
• Generally FIFO, plus a hardware-provided reference bit
• Clock replacement: if the candidate page's reference bit is 0, replace it; if the bit is 1, clear it, leave the page in memory, and move on to the next page (sketched below)
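A compact sketch of clock (second-chance) victim selection, using hypothetical names and a simulated reference-bit array; a real kernel would read and clear the hardware reference bits in the page-table entries instead.

```c
#include <stdio.h>

#define NFRAMES 8

static int ref_bit[NFRAMES];   /* reference bits, normally set by hardware on access */
static int hand;               /* the clock hand: next frame to consider             */

/* Return the index of the frame to replace. A frame whose reference bit
 * is set gets a second chance: the bit is cleared and the hand moves on. */
int clock_pick_victim(void) {
    for (;;) {
        if (ref_bit[hand] == 0) {               /* not recently used: victim found */
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
        ref_bit[hand] = 0;                      /* second chance: clear and advance */
        hand = (hand + 1) % NFRAMES;
    }
}

int main(void) {
    ref_bit[0] = 1;                             /* frames 0 and 1 were recently used */
    ref_bit[1] = 1;
    printf("victim: %d\n", clock_pick_victim());   /* prints 2 */
    return 0;
}
```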
• Fixed allocation
• Priority allocation
• Equal allocation – For example, if there are 100 frames (after allocating frames for the OS)
and 5 processes, give each process 20 frames
• Keep some as free frame buffer pool
• Proportional allocation – allocate frames in proportion to the size of each process
m = 64
s_i = size of process p_i
S = Σ s_i
m = total number of frames
a_i = allocation for p_i = (s_i / S) × m
Example: s1 = 10, s2 = 127
a1 = (10 / 137) × 62 ≈ 4
a2 = (127 / 137) × 62 ≈ 57
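Recomputing the slide's proportional-allocation figures in a short sketch (62 frames distributed between processes of sizes 10 and 127; fractional shares are truncated as on the slide):

```c
#include <stdio.h>

int main(void) {
    int m = 62;                     /* frames being distributed (from the slide's example) */
    int s[] = {10, 127};            /* sizes of processes p1 and p2                        */
    int S = s[0] + s[1];            /* total size S = 137                                  */

    for (int i = 0; i < 2; i++) {
        double a = (double)s[i] / S * m;        /* a_i = (s_i / S) * m */
        printf("a%d = %d/%d * %d = %.2f  ->  about %d frames\n",
               i + 1, s[i], S, m, a, (int)a);
    }
    return 0;
}
```

This prints roughly 4 frames for p1 and 57 frames for p2, matching the values above.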
• Global replacement – process selects a replacement frame from the set of all
frames; one process can take a frame from another
• Local replacement – each process selects from only its own set of allocated frames
• If a process does not have “enough” pages, the page-fault rate is very high:
• page fault to get a page
• replace an existing frame
• but quickly need the replaced frame back
• This cycle, in which the process spends more time paging than executing, is thrashing
Locality model
• A locality is a set of pages that are actively used together
• A process migrates from one locality to another
• Localities may overlap
• Prepage all or some of the pages a process will need, before they are referenced
• But if prepaged pages are unused, I/O and memory were wasted
• Consider I/O – pages that are being used to copy a file from a device must be locked so they
cannot be selected for eviction by the page-replacement algorithm