Lec 15: Page Replacement
TLB Faults
[Figure: timeline of user instructions (Inst 1, Inst 2); a faulting instruction traps from user mode into the OS, which loads the missing TLB entry (first fetching the page from disk if it is not resident) and then restarts the faulting instruction]
• Keep a linked list of pages: the head holds the most recently used page, the tail holds the LRU page (see the sketch below)
– On each use, remove the page from the list and place it at the head
– LRU page is at the tail
• Problems with this scheme for paging?
– Need to know immediately when each page is used, so that its
position in the list can be updated…
– Many extra instructions for every hardware memory access
• In practice, people approximate LRU (more later)
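
A minimal sketch of the linked-list idea in Python (the class and method names are made up for illustration, and an OrderedDict stands in for the doubly-linked list):

    from collections import OrderedDict

    class LRUFrames:
        """Exact LRU: most recently used page at the head, LRU victim at the tail."""
        def __init__(self, num_frames):
            self.frames = OrderedDict()   # resident pages, ordered head (MRU) -> tail (LRU)
            self.num_frames = num_frames

        def access(self, page):
            """Reference a page; returns True on a hit, False on a page fault."""
            if page in self.frames:
                self.frames.move_to_end(page, last=False)   # on each use, move page to the head
                return True
            if len(self.frames) >= self.num_frames:
                self.frames.popitem(last=True)               # evict from the tail (the LRU page)
            self.frames[page] = True
            self.frames.move_to_end(page, last=False)        # new page enters at the head
            return False

Note the cost: the list must be updated on every single memory reference, which is exactly why real systems approximate LRU instead of implementing it exactly.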
Example: FIFO
• Suppose we have 3 page frames, 4 virtual pages, and the
following reference stream:
– A B C A B D A D B C B
• Consider FIFO Page replacement:
Ref:     A  B  C  A  B  D  A  D  B  C  B
Frame 1: A  A  A  A  A  D  D  D  D  C  C
Frame 2: -  B  B  B  B  B  A  A  A  A  A
Frame 3: -  -  C  C  C  C  C  C  B  B  B
– FIFO: 7 faults.
– When D is referenced, replacing A is a bad choice, since A is
needed again right away
Ref:     A  B  C  A  B  D  A  D  B  C  B
Frame 1: A  A  A  A  A  A  A  A  A  C  C
Frame 2: -  B  B  B  B  B  B  B  B  B  B
Frame 3: -  -  C  C  C  D  D  D  D  D  D
– MIN: 5 faults
– Where will D be brought in? Replace the page whose next
reference is farthest in the future (here C)
• What will LRU do?
– Same decisions as MIN here, but won’t always be true!
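
The fault counts above can be checked with a small simulation; here is a sketch under the same assumptions (3 frames, the reference string from the example above; the function names are my own):

    def fifo_faults(refs, num_frames):
        frames, fifo_queue, faults = set(), [], 0
        for page in refs:
            if page not in frames:
                faults += 1
                if len(frames) >= num_frames:
                    victim = fifo_queue.pop(0)      # evict the page loaded earliest
                    frames.remove(victim)
                frames.add(page)
                fifo_queue.append(page)
        return faults

    def min_faults(refs, num_frames):
        frames, faults = set(), 0
        for i, page in enumerate(refs):
            if page not in frames:
                faults += 1
                if len(frames) >= num_frames:
                    # Evict the resident page whose next reference is farthest in the
                    # future (a page never referenced again counts as infinitely far).
                    def next_use(p):
                        later = refs[i + 1:]
                        return later.index(p) if p in later else float('inf')
                    frames.remove(max(frames, key=next_use))
                frames.add(page)
        return faults

    refs = list("ABCABDADBCB")
    print(fifo_faults(refs, 3), min_faults(refs, 3))    # prints: 7 5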
Graph of Page Faults Versus The Number of Frames
[Figure omitted: the page-fault count falls as the number of available frames grows]
Second-Chance List Algorithm (VAX/VMS)
[Figure: memory is split into Directly Mapped (Active) pages, marked RW and kept on a FIFO list, and a Second Chance (SC) list of pages marked Invalid and kept in LRU order; new page-ins from disk enter the Active list, overflow from the Active list moves to the SC list, and victims are paged out from the end of the SC list]
• Split memory in two: Active list (RW), SC list (Invalid)
• Access pages in Active list at full speed
• Otherwise, Page Fault
– Always move overflow page from end of Active list to
front of Second-chance list (SC) and mark invalid
– Desired page on SC list: move it to the front of the Active list,
mark RW (see the sketch after this list)
– Not on SC list: page in to front of Active list, mark RW;
page out LRU victim at end of SC list
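
A rough sketch of the Active/SC mechanism just described (an illustration of the idea, not the actual VAX/VMS code; the list sizes, names, and the disk-I/O placeholder are assumptions):

    from collections import deque

    class SecondChanceLists:
        def __init__(self, active_size, sc_size):
            self.active = deque()   # pages mapped RW, FIFO order (front = newest)
            self.sc = deque()       # pages marked invalid, LRU order (front = newest)
            self.active_size = active_size
            self.sc_size = sc_size

        def fault(self, page):
            """Handle a reference that missed the Active list.
            Returns True if the page had to be brought in from disk."""
            from_disk = page not in self.sc
            if not from_disk:
                self.sc.remove(page)          # on the SC list: recover it without a disk read
            self.active.appendleft(page)      # move to front of Active list, mark RW
            if len(self.active) > self.active_size:
                overflow = self.active.pop()  # end of Active list (FIFO order)...
                self.sc.appendleft(overflow)  # ...moves to front of SC list, marked invalid
                if len(self.sc) > self.sc_size:
                    victim = self.sc.pop()    # LRU victim at the end of the SC list
                    # page `victim` out to disk here
            return from_disk

References that hit the Active list never reach fault() at all, which is what lets the common case run at full speed.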
Second-Chance List Algorithm (con’t)
• How many pages for second chance list?
– If 0 ⇒ FIFO
– If all ⇒ LRU, but page fault on every page reference
• Pick intermediate value. Result is:
– Pro: Few disk accesses (page only goes to disk if unused
for a long time)
– Con: Increased overhead trapping to OS (software /
hardware tradeoff)
• With page translation, we can adapt to any kind of
access the program makes
– Later, we will show how to use page translation /
protection to share memory between threads on widely
separated machines
• Question: why didn’t VAX include “use” bit?
– Strecker (the architect) asked the OS people; they said they
didn’t need it, so he didn’t implement it
– He later got blamed, but VAX did OK anyway
Free List
[Figure: a single clock hand sweeps over the set of all pages in memory, advancing as needed to keep the free list full (“background”); dirty pages (marked D) are written back to disk, and the free list supplies free pages for processes]
• Keep a set of free pages ready for use in demand paging
– Freelist filled in the background by the Clock algorithm or another
technique (“Pageout daemon”), sketched below
– Dirty pages start copying back to disk when they enter the list
• Like the VAX second-chance list
– If a page is needed before it is reused, just return it to the active set
• Advantage: Faster for page fault
– Can always use page (or pages) immediately on fault
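
A simplified sketch of that background filling loop (the Frame fields use_bit, dirty, and start_writeback() are assumed for illustration, as is the low-water mark FREE_TARGET):

    import time

    FREE_TARGET = 64    # hypothetical low-water mark for the free list

    def pageout_daemon(all_frames, free_list):
        """Background loop: advance a single clock hand to keep the free list full."""
        hand = 0
        while True:
            while len(free_list) < FREE_TARGET:
                frame = all_frames[hand]
                hand = (hand + 1) % len(all_frames)
                if frame.use_bit:              # referenced since the last sweep: second chance
                    frame.use_bit = False
                    continue
                if frame.dirty:
                    frame.start_writeback()    # dirty pages start copying back as they enter the list
                free_list.append(frame)        # owner can still reclaim it until the frame is reused
            time.sleep(0.1)                    # wake periodically ("background")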
Demand Paging (more details)
• Does a software-loaded TLB need a use bit?
Two options:
– Hardware sets the use bit in the TLB; when a TLB entry is
replaced, software copies the use bit back to the page table
(see the sketch below)
– Software manages TLB entries as a FIFO list; everything not in
the TLB forms a Second-Chance list, managed as strict LRU
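
For the first option, the software miss handler does the copy when it recycles a TLB entry. A tiny sketch (the TLB entry and PTE field names are hypothetical):

    def evict_tlb_entry(tlb, page_table, slot):
        """Before overwriting a software-managed TLB slot, copy the use (and dirty)
        bits that hardware set in that entry back into the page table entry, so the
        page-replacement policy still sees the recent usage."""
        entry = tlb[slot]
        pte = page_table[entry.vpn]
        pte.use |= entry.use
        pte.dirty |= entry.dirty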
• Core Map
– Page tables map virtual page → physical page
– Do we need a reverse mapping (i.e. physical page →
virtual page)?
» Yes: the Clock algorithm runs through physical page frames, and
with sharing, multiple virtual pages can map to one physical page
» Can’t push a page out to disk without invalidating all of its PTEs
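
A sketch of what a core-map entry might record so that eviction can find and invalidate every PTE for a frame (the structure and field names are invented for illustration):

    from dataclasses import dataclass, field

    @dataclass
    class CoreMapEntry:
        """Per-physical-frame record: the reverse of the page table."""
        frame_number: int
        mappers: list = field(default_factory=list)   # [(address_space, virtual_page), ...]

    def evict_frame(core_map, page_tables, frame_number):
        """Before pushing a frame out to disk, invalidate every PTE that maps it
        (a real kernel would also shoot down the corresponding TLB entries)."""
        for address_space, vpn in core_map[frame_number].mappers:
            page_tables[address_space][vpn].valid = False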