Multilevel Caches and Replacement Policies
Cache memory
Cache memories are small, fast SRAM-based memories managed automatically in hardware.
- Hold frequently accessed blocks of main memory
- Small amount of fast memory
- Sits between normal main memory and the CPU
- May be located on the CPU chip or module
Multiple-Level Caches
More levels in the memory hierarchy: a system can have two levels of cache.
- The Level-1 cache (L1 cache, or internal cache) is smaller and faster, and lies next to the CPU on the processor chip.
- The Level-2 cache (L2 cache, or external cache) is larger but slower, and lies outside the processor.
- A memory access first goes to the L1 cache. If the L1 access is a miss, go to the L2 cache. If the L2 access is a miss, go to main memory. If main memory misses, go to virtual memory on the hard disk (a sketch of this lookup order follows below).
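A minimal sketch of this lookup order, assuming each level is modelled as a plain Python dict keyed by block address; the names read_block, l1, l2, main_memory, and disk are illustrative only and do not correspond to any real hardware or library interface:

```python
# Hypothetical sketch of the L1 -> L2 -> main memory -> disk lookup order.
# Each level is modelled as a plain dict mapping block address -> data.

def read_block(addr, l1, l2, main_memory, disk):
    """Return (data, level_name) for the level that satisfied the access."""
    if addr in l1:                       # L1 hit: fastest case
        return l1[addr], "L1"
    if addr in l2:                       # L1 miss, L2 hit: fill L1 on the way back
        l1[addr] = l2[addr]
        return l1[addr], "L2"
    if addr in main_memory:              # both caches miss: go to main memory
        l2[addr] = l1[addr] = main_memory[addr]
        return l1[addr], "main memory"
    # Main memory miss: fetch the block from virtual memory on the hard disk.
    main_memory[addr] = l2[addr] = l1[addr] = disk[addr]
    return l1[addr], "disk"
```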
Multilevel Caches
- Small, fast Level-1 (L1) cache: often on-chip for speed and bandwidth.
- Larger, slower Level-2 (L2) cache: closely coupled to the CPU; may be on-chip, or nearby on the module.
Multi-Level Caches
Options: separate data and instruction caches, or a unified cache
Hit Time
Time to deliver a line in the cache to the processor (includes the time to determine whether the line is in the cache). Typical numbers:
- 1 clock cycle for L1
- 3-8 clock cycles for L2
Miss Penalty
Additional time required because of a miss.
- Typically 25-100 cycles for main memory
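As a rough illustration of how hit time and miss penalty combine, the sketch below estimates the average memory access time (AMAT) from the cycle counts above; the hit rates are assumed values chosen only for this example:

```python
# Rough average memory access time (AMAT) estimate using the figures above.
l1_hit_time = 1       # cycles (typical L1 hit time)
l2_hit_time = 5       # cycles (within the 3-8 cycle range for L2)
mem_penalty = 80      # cycles (within the 25-100 cycle range for main memory)

l1_hit_rate = 0.95    # assumed for illustration
l2_hit_rate = 0.90    # assumed, measured over accesses that miss in L1

amat = (l1_hit_time
        + (1 - l1_hit_rate) * (l2_hit_time + (1 - l2_hit_rate) * mem_penalty))
print(f"AMAT = {amat:.2f} cycles")   # 1.65 cycles with these numbers
```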
Write policy: determines whether updates to cache words are immediately forwarded to main memory (write-through), or whether modified blocks are copied back to main memory only if and when they must be replaced (write-back or copy-back).
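A minimal sketch contrasting the two policies, with the cache and main memory modelled as Python dicts and modified blocks tracked in a set of dirty addresses; all names here are hypothetical:

```python
# Write-through: every write updates the cache and is immediately
# forwarded to main memory.
def write_through(cache, main_memory, addr, value):
    cache[addr] = value
    main_memory[addr] = value

# Write-back (copy-back): writes stay in the cache and the block is only
# copied back to main memory when it is replaced.
def write_back(cache, dirty, addr, value):
    cache[addr] = value
    dirty.add(addr)                      # mark the block as modified

def evict(cache, dirty, main_memory, addr):
    if addr in dirty:                    # modified block: copy back first
        main_memory[addr] = cache[addr]
        dirty.discard(addr)
    del cache[addr]
```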
Optimal Replacement
Replace the block that is no longer needed in the future. If all blocks currently in cache memory (CM) will be used again, replace the one that will not be used for the longest time into the future.
Optimal replacement is obviously the best policy, but it is not realistic, simply because when a block will be needed in the future is usually not known ahead of time.
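A sketch of how the victim would be chosen if the entire future reference string were known; the helper name optimal_victim is illustrative:

```python
# Optimal (Belady) replacement: evict the resident block whose next use
# lies farthest in the future, or one that is never used again.

def optimal_victim(resident_blocks, future_refs):
    def next_use(block):
        try:
            return future_refs.index(block)   # distance to the next use
        except ValueError:
            return float("inf")               # never used again: ideal victim
    return max(resident_blocks, key=next_use)

# With blocks {1, 2, 3} resident and future accesses [2, 1, 2, 1],
# block 3 is never referenced again, so it is chosen as the victim.
print(optimal_victim({1, 2, 3}, [2, 1, 2, 1]))   # -> 3
```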
Random selection
Replace a randomly selected block among all blocks currently in CM. This policy only requires a random or pseudo-random number generator.
[Figure: random policy: the new block replaces an old block chosen at random]
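A minimal sketch of random selection; only the standard pseudo-random number generator is needed and no access history is kept:

```python
import random

# Random replacement: the victim is chosen uniformly among resident blocks.
def random_victim(resident_blocks):
    return random.choice(list(resident_blocks))

print(random_victim({10, 20, 30}))   # any of the three blocks, at random
```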
FIFO
Replace the block that has been in CM for the longest time. The FIFO strategy just requires a queue Q to store references to the blocks in the cache.
[Figure: FIFO policy: the new block replaces the old block that has been present longest]
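A small FIFO sketch in which a deque plays the role of the queue Q; the class name FIFOCache and its methods are illustrative only:

```python
from collections import deque

class FIFOCache:
    """FIFO replacement: evict the block that entered the cache first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()      # block addresses, oldest first
        self.blocks = {}          # addr -> data

    def access(self, addr, data):
        if addr in self.blocks:   # hit: FIFO order is not updated
            return self.blocks[addr]
        if len(self.blocks) >= self.capacity:
            victim = self.queue.popleft()    # block present longest
            del self.blocks[victim]
        self.queue.append(addr)
        self.blocks[addr] = data
        return data
```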
LRU
Replace the block in CM that has not been used for the longest time, i.e., the least recently used (LRU) block. Implementing the LRU strategy requires the use of a priority queue Q.
[Figure: LRU policy: the new block replaces the old block that was least recently used, based on last-used times]
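A small LRU sketch; here an OrderedDict stands in for the priority queue Q, keeping blocks ordered from least to most recently used, and the class name LRUCache is illustrative:

```python
from collections import OrderedDict

class LRUCache:
    """LRU replacement: evict the block unused for the longest time."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # addr -> data, least recently used first

    def access(self, addr, data=None):
        if addr in self.blocks:
            self.blocks.move_to_end(addr)      # mark as most recently used
            return self.blocks[addr]
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        self.blocks[addr] = data
        return data
```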
THANK YOU