Swapping, Page Replacement Algorithms, and Thrashing
Operating Systems
Session 18
Swapping
© 2016 KL University – The contents of this presentation are an intellectual and copyrighted property of KL University. ALL RIGHTS RESERVED
Beyond Physical Memory: Mechanisms
• Requires an additional level in the memory hierarchy.
• The OS needs a place to stash away portions of address spaces that currently aren't in great demand.
• In modern systems, this role is usually served by a hard disk drive.

Memory hierarchy: Registers → Cache → Main Memory → Hard Disk Drive
Single Large Address Space for a Process
• The OS must first arrange for the code or data to be in memory before a function is called or data is accessed.
Swap Space
• Reserve some space on the disk for moving pages back and forth.
• The OS needs to remember the disk address of a given page; swap space is managed in page-sized units.
(Figure: physical memory frames PFN 0–3, with pages moved between memory and swap space.)
Present Bit
• Add some machinery higher up in the system in order to support
swapping pages to and from the disk.
• When the hardware looks in the PTE, it may find that the page is not present
in physical memory.
Value  Meaning
1      The page is present in physical memory.
0      The page is not in memory but rather on disk.
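The present bit can be sketched in code. This is a minimal illustration assuming a hypothetical 32-bit PTE layout (bit 0 = present, upper bits = PFN or disk address); real PTE formats are hardware-specific.

```python
# Sketch of decoding a page-table entry (PTE), assuming a hypothetical
# 32-bit layout: bit 0 = present bit, bits 12..31 = PFN (if present)
# or a disk address (if swapped out).
PRESENT_BIT = 0x1

def decode_pte(pte):
    """Return ('memory', pfn) or ('disk', disk_addr) for a PTE."""
    number = pte >> 12  # upper bits: PFN, or a disk address if swapped out
    if pte & PRESENT_BIT:
        return ("memory", number)  # present bit = 1: page is in physical memory
    return ("disk", number)        # present bit = 0: page lives on disk

print(decode_pte((7 << 12) | 1))  # -> ('memory', 7)
print(decode_pte(42 << 12))       # -> ('disk', 42)
```

The key idea: when the present bit is 0, the remaining PTE bits are free to hold the page's location in swap space.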
What If Memory Is Full?
• The OS may first page out one or more pages to make room for the new pages the OS is about to bring in.
• The process of picking a page to kick out, or replace, is governed by the page-replacement policy.
The Page Fault
• A page fault occurs when a process accesses a page that is not in physical memory.
• If a page is not present and has been swapped to disk, the OS needs to swap the page into memory in order to service the page fault.
Page Fault Control Flow
• The OS can use the bits in the PTE normally used for data such as the PFN of the page to hold a disk address.
(Figure: page-fault control flow — 1. the process references a page ("Load M") whose PTE is marked invalid (i); 2. this traps to the operating system; 3. the OS checks secondary storage for the page and brings it into a free page frame; 6. the instruction is restarted.)
When the OS receives a page fault, it looks in the PTE and issues the request to disk.
Page Fault Control Flow – Hardware
VPN = (VirtualAddress & VPN_MASK) >> SHIFT
(Success, TlbEntry) = TLB_Lookup(VPN)
if (Success == True)  // TLB Hit
    if (CanAccess(TlbEntry.ProtectBits) == True)
        Offset = VirtualAddress & OFFSET_MASK
        PhysAddr = (TlbEntry.PFN << SHIFT) | Offset
        Register = AccessMemory(PhysAddr)
    else
        RaiseException(PROTECTION_FAULT)
else  // TLB Miss
    PTEAddr = PTBR + (VPN * sizeof(PTE))
    PTE = AccessMemory(PTEAddr)
    if (PTE.Valid == False)
        RaiseException(SEGMENTATION_FAULT)
    else if (CanAccess(PTE.ProtectBits) == False)
        RaiseException(PROTECTION_FAULT)
    else if (PTE.Present == True)
        // assuming hardware-managed TLB
        TLB_Insert(VPN, PTE.PFN, PTE.ProtectBits)
        RetryInstruction()
    else  // PTE.Present == False
        RaiseException(PAGE_FAULT)
Page Fault Control Flow – Software
PFN = FindFreePhysicalPage()
if (PFN == -1)               // no free page found
    PFN = EvictPage()        // run replacement algorithm
DiskRead(PTE.DiskAddr, PFN)  // sleep (waiting for I/O)
PTE.Present = True           // update page table with present
PTE.PFN = PFN                // bit and translation (PFN)
RetryInstruction()           // retry instruction

The OS must find a physical frame for the soon-to-be-faulted-in page to reside within.
If there is no free frame, the OS waits for the replacement algorithm to run and kick some pages out of memory.
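The software path above can be sketched as a runnable simulation. This is a minimal sketch assuming a tiny machine model: a fixed number of frames, a dict-based page table, and FIFO standing in for EvictPage(); the disk I/O is elided.

```python
# Minimal sketch of the software page-fault path, assuming a tiny machine
# model: NUM_FRAMES physical frames, a dict-based page table, and a FIFO
# queue standing in for the real replacement algorithm (EvictPage()).
from collections import deque

NUM_FRAMES = 3
free_frames = deque(range(NUM_FRAMES))
resident = deque()   # VPNs currently in memory, in FIFO order
page_table = {}      # VPN -> {'present': bool, 'pfn': frame or None}

def handle_page_fault(vpn):
    if free_frames:
        pfn = free_frames.popleft()       # find a free physical frame
    else:
        victim = resident.popleft()       # none free: run replacement
        pfn = page_table[victim]['pfn']
        page_table[victim] = {'present': False, 'pfn': None}
    # DiskRead(...) would sleep here, waiting for I/O
    page_table[vpn] = {'present': True, 'pfn': pfn}   # update present bit + PFN
    resident.append(vpn)
    return pfn                            # caller would then retry the instruction

for vpn in [100, 101, 102, 103]:          # 4 faults, only 3 frames:
    handle_page_fault(vpn)                # vpn 100 is evicted for vpn 103
print(page_table[100], page_table[103])
```

Because only three frames exist, the fourth fault evicts the first-in page (vpn 100) and reuses its frame for vpn 103.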
When Replacements Really Occur
• The OS could wait until memory is entirely full, and only then replace a page to make room for some other page.
• This is a little bit unrealistic, and there are many reasons for the OS to keep a small portion of memory free more proactively.
Beyond Physical Memory: Policies
• Memory pressure forces the OS to start paging out pages to make
room for actively-used pages.
• Deciding which page to evict is encapsulated within the replacement
policy of the OS.
The Optimal Replacement Policy
• Leads to the fewest possible number of misses overall: it replaces the page that will be accessed furthest in the future.
• Serves only as a comparison point, to know how close we are to perfect.
Tracing the Optimal Policy
Reference stream: 0 1 2 0 1 3 0 3 1 2 1
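The optimal trace can be reproduced in code. A sketch of the optimal (MIN) policy with an assumed 3-page cache: on a miss with a full cache, evict the resident page whose next use is furthest in the future (or never).

```python
# Sketch of the optimal (MIN) replacement policy on the slide's
# reference stream, assuming a 3-page cache.
def opt_hits(refs, size):
    cache, hits = set(), 0
    for i, page in enumerate(refs):
        if page in cache:
            hits += 1
            continue
        if len(cache) == size:
            # evict the page whose next use is furthest away
            # (pages never referenced again sort last)
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else len(refs)
            cache.remove(max(cache, key=next_use))
        cache.add(page)
    return hits

refs = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
print(opt_hits(refs, 3))  # 6 hits on the example trace
```

Six hits out of eleven accesses is the best any policy can do here; the remaining five misses are unavoidable (three are cold-start misses).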
A Simple Policy: FIFO
• Pages are placed in a queue when they enter the system.
• When a replacement occurs, the page at the tail of the queue (the "first-in" page) is evicted.
• FIFO is simple to implement, but it cannot determine the importance of blocks.
Tracing the FIFO Policy
Reference stream: 0 1 2 0 1 3 0 3 1 2 1
Even though page 0 had been accessed a number of times, FIFO still kicks it out.
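The FIFO trace above can be sketched the same way, assuming a 3-page cache:

```python
# Sketch of FIFO replacement on the same reference stream,
# assuming a 3-page cache.
from collections import deque

def fifo_hits(refs, size):
    queue, hits = deque(), 0
    for page in refs:
        if page in queue:
            hits += 1
        else:
            if len(queue) == size:
                queue.popleft()   # evict the first-in page
            queue.append(page)
    return hits

refs = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
print(fifo_hits(refs, 3))  # 4 hits (vs. 6 for optimal)
```

FIFO evicts page 0 on the access to page 3, only to miss on page 0 immediately afterwards; that is the cost of ignoring access history.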
BELADY’S ANOMALY
• We would expect the cache hit rate to increase when the cache gets larger; but in this case, with FIFO, it gets worse.
Reference stream: 1 2 3 4 1 2 5 1 2 3 4 5
(Figure: page fault count vs. page frame count — with FIFO on this stream, adding frames can increase the number of faults.)
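The anomaly can be demonstrated directly. Counting FIFO faults on the stream above with 3 and then 4 frames:

```python
# Demonstrating Belady's anomaly: FIFO fault counts on the slide's
# reference stream with 3 frames vs. 4 frames.
from collections import deque

def fifo_faults(refs, frames):
    queue, faults = deque(), 0
    for page in refs:
        if page not in queue:
            faults += 1
            if len(queue) == frames:
                queue.popleft()   # evict the first-in page
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults: more frames, yet more faults
```

Policies such as LRU do not suffer from this anomaly: they have the "stack property" (the contents of a smaller cache are always a subset of a larger one), which FIFO lacks.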
Another Simple Policy: Random
• Picks a random page to replace under memory pressure.
• It doesn't really try to be too intelligent in picking which blocks to evict.
• How well Random does depends entirely upon how lucky it gets in its choices.
Access  Hit/Miss?  Evict  Resulting Cache State
0       Miss              0
1       Miss              0,1
2       Miss              0,1,2
0       Hit               0,1,2
1       Hit               0,1,2
3       Miss       0      1,2,3
0       Miss       1      2,3,0
3       Hit               2,3,0
1       Miss       3      2,0,1
2       Hit               2,0,1
1       Hit               2,0,1
Random Performance
• Sometimes, Random is as good as optimal, achieving 6 hits on the example trace.
(Figure: frequency of the number of hits Random achieves across many trials on the example trace.)
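Random's luck can be measured by running it many times. A sketch, assuming a 3-page cache and 10,000 independently seeded trials on the example trace:

```python
# Sketch of the Random policy, run over many seeded trials on the
# example trace (3-page cache) to tally how the hit count varies.
import random
from collections import Counter

def random_hits(refs, size, rng):
    cache, hits = [], 0
    for page in refs:
        if page in cache:
            hits += 1
        else:
            if len(cache) == size:
                cache.pop(rng.randrange(size))  # evict a random resident page
            cache.append(page)
    return hits

refs = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
counts = Counter(random_hits(refs, 3, random.Random(seed))
                 for seed in range(10000))
print(sorted(counts.items()))  # frequency of each hit count
```

No trial can beat 6 hits (the optimal bound), and none can fall below 2, since the first two re-references of pages 0 and 1 occur before any eviction is possible.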
Using History
• Lean on the past and use history.
• Two types of historical information:

Historical Information  Meaning                                                                                           Algorithm
recency                 The more recently a page has been accessed, the more likely it will be accessed again.            LRU
frequency               If a page has been accessed many times, it should not be replaced, as it clearly has some value.  LFU
Using History: LRU
• Replaces the least-recently-used page.
Reference stream: 0 1 2 0 1 3 0 3 1 2 1
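LRU on this trace can be sketched with an ordered dictionary serving as the recency list (least-recently-used entry at the front), again assuming a 3-page cache:

```python
# Sketch of LRU on the example stream (3-page cache), using an
# OrderedDict as the recency list: front = least recently used.
from collections import OrderedDict

def lru_hits(refs, size):
    cache, hits = OrderedDict(), 0
    for page in refs:
        if page in cache:
            hits += 1
            cache.move_to_end(page)        # refresh recency on a hit
        else:
            if len(cache) == size:
                cache.popitem(last=False)  # evict the least-recently-used page
            cache[page] = True
    return hits

refs = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
print(lru_hits(refs, 3))  # 6 hits — matches optimal on this trace
```

On this particular stream, recency is a perfect predictor of the future, so LRU matches the optimal policy's 6 hits.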
Workload Example: The No-Locality Workload
• Each reference is to a random page within the set of accessed pages.
• The workload accesses 100 unique pages over time, choosing the next page to refer to at random.
• When there is no locality in the workload, it doesn't matter which policy you use.
(Figure: hit rate vs. cache size (blocks) for OPT, LRU, FIFO, and RAND on the no-locality workload.)
Workload Example: The 80-20 Workload
• Exhibits locality: 80% of the references are made to 20% of the pages.
• The remaining 20% of the references are made to the remaining 80% of the pages.
• LRU is more likely to hold onto the hot pages.
(Figure: hit rate vs. cache size (blocks) for OPT, LRU, FIFO, and RAND on the 80-20 workload.)
Workload Example: The Looping-Sequential Workload
• Refers to 50 pages in sequence: starting at 0, then 1, … up to page 49, and then looping, repeating those accesses, for a total of 10,000 accesses to 50 unique pages.
• This is a worst case for LRU and FIFO: with a cache smaller than 50 pages, they evict each page just before it is needed again.
(Figure: hit rate vs. cache size (blocks) for OPT, LRU, FIFO, and RAND on the looping-sequential workload.)
Implementing Historical Algorithms
• To keep track of which pages have been least- and most-recently used, the system has to do some accounting work on every memory reference.
• Add a little bit of hardware support.
Approximating LRU
• Requires some hardware support, in the form of a use bit.
• Whenever a page is referenced, the use bit is set to 1 by the hardware.
• The hardware never clears the bit, though; that is the responsibility of the OS.
• Clock Algorithm
  • All pages of the system are arranged in a circular list.
  • A clock hand points to some particular page to begin with.
Clock Algorithm
• The algorithm continues until it finds a use bit that is set to 0.
(Figure: pages A–H arranged in a circular list, with a clock hand pointing at the current page.)

Use bit  Action
0        Evict the page.
1        Clear the use bit and advance the hand.

The clock page replacement algorithm: when a page fault occurs, the page the hand is pointing to is inspected, and the action taken depends on the use bit.
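The table above translates directly into code. A minimal sketch, assuming a small fixed set of resident pages and software-simulated use bits:

```python
# Sketch of the clock algorithm: resident pages in a circular list, each
# with a use bit; the hand clears set bits until it finds a clear one.
class Clock:
    def __init__(self, pages):
        self.pages = list(pages)           # circular list of resident pages
        self.use = {p: 0 for p in pages}   # use bit per page
        self.hand = 0

    def reference(self, page):
        self.use[page] = 1                 # hardware sets the use bit on access

    def evict(self):
        while True:
            page = self.pages[self.hand]
            if self.use[page] == 0:        # use bit 0: evict this page
                return page
            self.use[page] = 0             # use bit 1: clear it, advance hand
            self.hand = (self.hand + 1) % len(self.pages)

clock = Clock(["A", "B", "C", "D"])
clock.reference("A")
clock.reference("B")
print(clock.evict())  # "C": the first page the hand reaches with use bit 0
```

Recently referenced pages (A and B) get a "second chance": their use bits are cleared and the hand moves on, approximating LRU without per-reference bookkeeping.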
Workload with Clock Algorithm
• Although the clock algorithm doesn't do as well as perfect LRU, it does better than approaches that don't consider history at all.
(Figure: hit rate vs. cache size (blocks) for OPT, LRU, Clock, FIFO, and RAND on the 80-20 workload.)
Considering Dirty Pages
• The hardware includes a modified bit (a.k.a. dirty bit).
• If a page has been modified and is thus dirty, it must be written back to disk to evict it.
• If a page has not been modified, the eviction is free.
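The asymmetry can be sketched in a few lines. This is a minimal illustration, assuming a page is a dict with a dirty flag and with the disk write shown only as a placeholder comment:

```python
# Sketch: evicting a clean page is free, while a dirty page must first
# be written back to swap space before its frame can be reused.
def evict(page):
    if page["dirty"]:
        # DiskWrite(page) would go here: pay one I/O to write the page back
        page["dirty"] = False
        return "written back, then evicted"
    return "evicted for free"

print(evict({"vpn": 3, "dirty": True}))   # modified page: costs a disk write
print(evict({"vpn": 7, "dirty": False}))  # clean page: no I/O needed
```

This is why some replacement policies prefer clean victims over dirty ones: the former cost no I/O on eviction.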
Page Selection Policy
• The OS has to decide when to bring a page into memory.
• Presents the OS with some different options.
Prefetching
• The OS guesses that a page is about to be used, and thus brings it in ahead of time.
(Figure: page 1 is brought into physical memory from secondary storage; page 2 is likely to be accessed soon and thus should be brought into memory too.)
Clustering, Grouping
• Collect a number of pending writes together in memory and write them to disk in one write.
• A single large write can be performed more efficiently than many small ones.
(Figure: pending writes in physical memory are flushed to secondary storage in one write.)
Thrashing
• Memory is oversubscribed: the memory demands of the set of running processes exceed the available physical memory.
• One response is to decide not to run a subset of processes, in the hope that the reduced set of processes' working sets fit in memory.
(Figure: CPU utilization vs. degree of multiprogramming — utilization rises with more processes, then collapses once thrashing sets in.)
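The "run only a subset" idea can be sketched as a simple admission check. This is a hypothetical greedy sketch, assuming each process's working-set size (in pages) is known and that processes with the smallest working sets are admitted first; real systems estimate working sets rather than knowing them exactly.

```python
# Sketch of admission control: given each process's working-set size in
# pages, admit only a subset whose working sets fit in physical memory
# (greedy: smallest working sets first; remaining processes are suspended).
def admit(working_sets, mem_pages):
    admitted, used = [], 0
    for pid, ws in sorted(working_sets.items(), key=lambda kv: kv[1]):
        if used + ws <= mem_pages:
            admitted.append(pid)
            used += ws
    return admitted

# 100 frames available; process C's 50-page working set doesn't fit
# alongside the others, so C is left suspended.
print(admit({"A": 40, "B": 25, "C": 50, "D": 10}, 100))  # ['D', 'B', 'A']
```

By keeping every admitted process's working set resident, the system avoids the constant paging that drives CPU utilization toward zero in the figure above.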