Multilevel Caches and Replacement Policies

Caches are small, fast memories that store frequently accessed blocks of main memory to reduce access time. A multi-level cache has several cache levels with different speeds: the level 1 (L1) cache is the fastest and sits on the processor, while the level 2 (L2) cache is slower and sits outside the processor. When accessing data, the processor first checks L1, then L2, and finally main memory if both caches miss. Caches use replacement policies such as least recently used (LRU) to decide which cached block to evict when bringing in a new block from main memory.


Multi-Level Caches & Replacement Policies

CA & PP assignment presentation, 15/08/2012

Cache memory
Cache memories are small, fast SRAM-based memories managed automatically in hardware.
- Hold frequently accessed blocks of main memory
- Small amount of fast memory
- Sits between normal main memory and the CPU
- May be located on the CPU chip or module


Multiple-Level Caches
- More levels in the memory hierarchy: a system can have two levels of cache.
- The Level-1 cache (L1 cache, or internal cache) is smaller and faster, and lies in the processor next to the CPU.
- The Level-2 cache (L2 cache, or external cache) is larger but slower, and lies outside the processor.
- A memory access first goes to the L1 cache. If the L1 access is a miss, go to the L2 cache. If the L2 access is a miss, go to main memory. If main memory misses, go to virtual memory on the hard disk. (A sketch of this lookup chain follows.)
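The lookup chain above can be sketched in a few lines. This is a minimal illustration, assuming each level is modeled as a Python dict from block address to data; the names `l1`, `l2`, and `main_memory` are placeholders, not from the slides:

```python
# Minimal sketch of a two-level cache lookup. Each level is a dict
# from block address to data. On a miss at one level, the next lower
# level is consulted and the block is promoted on the way back up.

def access(address, l1, l2, main_memory):
    if address in l1:                 # L1 hit: fastest path
        return l1[address]
    if address in l2:                 # L1 miss, L2 hit
        block = l2[address]
        l1[address] = block           # promote the block into L1
        return block
    block = main_memory[address]      # both caches missed: go to main
    l2[address] = block               # fill L2, then L1
    l1[address] = block
    return block
```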

Multilevel Caches
- Small, fast Level 1 (L1) cache: often on-chip for speed and bandwidth.
- Larger, slower Level 2 (L2) cache: closely coupled to the CPU; may be on-chip, or nearby on the module.


Typical bus architecture

(figure: diagram of a typical bus architecture)


Multi-Level Caches
Options: separate data and instruction caches, or a unified cache


Instruction and Data Caches


A system can either have a separate Instruction Cache and Data Cache, or one unified cache.
- Advantage of separate caches: the Instruction Cache and Data Cache can be accessed simultaneously in the same cycle, as required by a pipelined datapath.
- Advantage of a unified cache: more flexible, so it may have a higher hit rate.


Cache Performance Metrics


Miss Rate
- Fraction of memory references not found in the cache (misses / references).
- Typical numbers: 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.

Hit Time
- Time to deliver a line in the cache to the processor (includes the time to determine whether the line is in the cache).
- Typical numbers: 1 clock cycle for L1; 3-8 clock cycles for L2.

Miss Penalty
- Additional time required because of a miss.
- Typically 25-100 cycles for main memory.
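These three metrics are commonly combined into the average memory access time, AMAT = hit time + miss rate * miss penalty, applied level by level. A worked sketch using values from the typical ranges above (the exact rates chosen here are illustrative, not from the slides):

```python
# Average memory access time (AMAT) for a two-level hierarchy:
# AMAT = L1 hit time + L1 miss rate * (L2 hit time + L2 miss rate * penalty)

l1_hit_time = 1      # cycles: typical L1 hit time quoted above
l1_miss_rate = 0.05  # 5%, within the 3-10% range for L1
l2_hit_time = 5      # cycles, within the 3-8 cycle range for L2
l2_miss_rate = 0.01  # 1%, "quite small" for L2
miss_penalty = 60    # cycles to main memory, within 25-100

amat = l1_hit_time + l1_miss_rate * (l2_hit_time + l2_miss_rate * miss_penalty)
print(f"AMAT = {amat:.2f} cycles")   # 1 + 0.05 * (5 + 0.01 * 60) = 1.28 cycles
```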

Cache Memory Design Parameters


- Cache size (in bytes or words). A larger cache can hold more of the program's useful data, but is more costly and likely to be slower.
- Block or cache-line size (the unit of data transfer between the cache and main memory). With a larger cache line, more data is brought into the cache on each miss; this can improve the hit rate but may also bring in low-utility data.
- Placement policy. Determines where an incoming cache line is stored. More flexible policies imply higher hardware cost and may or may not have performance benefits (due to more complex data location). (A sketch of how these parameters split an address follows.)
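Cache size and line size together determine how an address is decomposed when locating data. A minimal sketch for a direct-mapped placement, assuming power-of-two sizes (the 32 KiB cache and 64-byte line are illustrative values, not from the slides):

```python
# Splitting an address into tag / index / offset for a direct-mapped
# cache. Assumes power-of-two sizes; the numbers are illustrative.

cache_size = 32 * 1024      # bytes
line_size = 64              # bytes per cache line
num_lines = cache_size // line_size          # 512 lines

offset_bits = line_size.bit_length() - 1     # log2(64)  = 6
index_bits = num_lines.bit_length() - 1      # log2(512) = 9

def split_address(addr):
    offset = addr & (line_size - 1)                   # byte within the line
    index = (addr >> offset_bits) & (num_lines - 1)   # which cache line
    tag = addr >> (offset_bits + index_bits)          # identifies the block
    return tag, index, offset

print(split_address(0x12345678))   # -> (tag, index, offset)
```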

Cache Memory Design Parameters (cont.)


- Replacement policy. Determines which of the several existing cache blocks (into which a new cache line can be mapped) should be overwritten. Typical policies: choosing a random block or the least recently used block.
- Write policy. Determines whether updates to cached words are immediately forwarded to main memory (write-through) or modified blocks are copied back to main memory only if and when they must be replaced (write-back or copy-back). (A sketch contrasting the two follows.)
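A minimal sketch contrasting the two write policies; the class names and dict-based memory model are illustrative, and the `dirty` set marks modified blocks that a write-back cache must copy to main memory on eviction:

```python
# Sketch of the two write policies, with main memory modeled as a dict.

class WriteThroughCache:
    def __init__(self, memory):
        self.data, self.memory = {}, memory

    def write(self, addr, value):
        self.data[addr] = value
        self.memory[addr] = value     # update forwarded to main immediately

class WriteBackCache:
    def __init__(self, memory):
        self.data, self.dirty, self.memory = {}, set(), memory

    def write(self, addr, value):
        self.data[addr] = value
        self.dirty.add(addr)          # defer the update; mark block dirty

    def evict(self, addr):
        if addr in self.dirty:        # copy back only if modified
            self.memory[addr] = self.data[addr]
            self.dirty.discard(addr)
        del self.data[addr]
```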


Cache Memory: Replacement Policy


When a main-memory (MM) block needs to be brought in while all the cache-memory (CM) blocks are occupied, one of them has to be replaced. The block to be replaced can be selected in one of the following ways:

1) Optimal replacement
2) Random selection
3) FIFO (first-in, first-out)
4) LRU (least recently used)


Optimal Replacement
Replace the block that is no longer needed in the future. If all blocks currently in CM will be used again, replace the one whose next use lies furthest in the future.

Optimal replacement is obviously the best, but it is not realistic, simply because when a block will be needed in the future is usually not known ahead of time.
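Although not realizable in hardware, the optimal policy (often called Belady's algorithm) can be simulated offline when the full reference string is known in advance. A sketch under that assumption:

```python
# Offline simulation of optimal replacement (Belady's algorithm):
# evict the resident block whose next use is furthest in the future.

def optimal_victim(cache, future_refs):
    """Pick the victim among `cache`, given the list of upcoming references."""
    best_block, best_distance = None, -1
    for block in cache:
        try:
            distance = future_refs.index(block)   # next use of this block
        except ValueError:
            return block                          # never used again: ideal victim
        if distance > best_distance:
            best_block, best_distance = block, distance
    return best_block

# Example: cache holds blocks {1, 2, 3}; block 2 is used furthest away.
print(optimal_victim({1, 2, 3}, [1, 3, 1, 2]))    # -> 2
```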


Random selection
Replace a randomly selected block among all blocks currently in CM. This only requires a random or pseudo-random number generator.
(figure: random policy; the new block replaces an old block chosen at random)
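A minimal sketch of the random policy; as noted above, it needs nothing more than a pseudo-random number generator (the cache is modeled as a plain set of block addresses):

```python
import random

# Random replacement: on a miss with a full cache, evict an
# arbitrary resident block chosen by a pseudo-random generator.

def access_random(cache, block, capacity):
    if block in cache:
        return                                  # hit: nothing to do
    if len(cache) >= capacity:
        victim = random.choice(tuple(cache))    # pick any resident block
        cache.discard(victim)
    cache.add(block)
```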


FIFO
Replace the block that has been in CM for the longest time. The FIFO strategy just requires a queue Q to store references to the blocks in the cache.
(figure: FIFO policy; the new block replaces the block present longest. Blocks are annotated with insert times 8:00 am, 7:48 am, 9:05 am, 7:10 am, 7:30 am, 10:10 am, and 8:45 am; the 7:10 am block, present longest, is replaced.)
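A minimal sketch of the FIFO policy using the queue Q the slide describes; the block at the head of the queue is the one present longest:

```python
from collections import deque

# FIFO replacement: a queue records insertion order; the block that
# has been resident longest (the queue head) is evicted on a full miss.

def access_fifo(cache, queue, block, capacity):
    if block in cache:
        return                    # hit: FIFO order is NOT updated on use
    if len(cache) >= capacity:
        victim = queue.popleft()  # block present longest
        cache.discard(victim)
    cache.add(block)
    queue.append(block)
```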


LRU
Replace the block in CM that has not been used for the longest time, i.e., the least recently used (LRU) block. Implementing the LRU strategy requires a priority queue Q.
(figure: LRU policy; the new block replaces the least recently used block. Blocks are annotated with last-used times 7:25 am, 8:12 am, 9:22 am, 6:50 am, 8:20 am, 10:02 am, and 9:50 am; the 6:50 am block, least recently used, is replaced.)
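A minimal sketch of LRU using Python's OrderedDict in place of the priority queue Q; each access moves a block to the most-recently-used end, and eviction takes the least-recently-used end:

```python
from collections import OrderedDict

# LRU replacement: an OrderedDict keeps blocks in recency order.
# Each access moves the block to the MRU end; eviction takes the LRU end.

def access_lru(cache, block, capacity):
    if block in cache:
        cache.move_to_end(block)            # refresh recency on a hit
        return
    if len(cache) >= capacity:
        cache.popitem(last=False)           # evict least recently used
    cache[block] = True                     # insert as most recently used

# Example: with capacity 2, accessing 1, 2, 1, 3 evicts block 2.
c = OrderedDict()
for b in (1, 2, 1, 3):
    access_lru(c, b, capacity=2)
print(list(c))                              # -> [1, 3]
```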


THANK YOU

