The document discusses cache memory principles, including cache read operations, caching in a memory hierarchy, and various cache design elements such as mapping functions and replacement algorithms. It outlines the differences between direct, associative, and set associative mapping, as well as write policies like write through and write back. Additionally, it highlights the importance of managing cache hits and misses in relation to main memory access and virtual memory address translation.


Chapter 5

Cache Memory Principles


Cache/Main Memory Structure
Cache Read Operation
The processor generates the read address (RA) of a word to be read.
If the word is contained in the cache, it is delivered to the processor.
Otherwise, the block containing that word is loaded into the
cache, and the word is delivered to the processor.

The figure below shows these last two operations occurring in parallel.

[Figure: cache read operation flowchart, with the hit path and the miss path shown in parallel.]
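As a sketch of this flow, the toy C program below models a read against a small cache (all sizes and helper names are illustrative, and direct mapping is used for concreteness; the mapping functions themselves are covered later in this chapter): on a hit the cached word is delivered directly, and on a miss the whole block is first copied in from main memory.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Toy cache: 4 lines of 4-word blocks, direct mapped (illustrative sizes). */
#define LINES 4
#define BLOCK_WORDS 4

static uint32_t memory[64];   /* "main memory", word-addressed */
static struct { int valid; uint32_t tag; uint32_t data[BLOCK_WORDS]; } cache[LINES];

/* Cache read operation: deliver the word on a hit; on a miss, load the
   block containing that word into the cache first, then deliver the word. */
static uint32_t read_word(uint32_t ra)
{
    uint32_t block = ra / BLOCK_WORDS;   /* block number of address RA  */
    uint32_t line  = block % LINES;      /* direct mapping: i = j mod m */
    uint32_t tag   = block / LINES;

    if (!cache[line].valid || cache[line].tag != tag) {        /* miss */
        memcpy(cache[line].data, &memory[block * BLOCK_WORDS],
               sizeof cache[line].data); /* fetch whole block from memory */
        cache[line].valid = 1;
        cache[line].tag   = tag;
    }
    return cache[line].data[ra % BLOCK_WORDS];                 /* deliver word */
}

int main(void)
{
    for (uint32_t i = 0; i < 64; i++) memory[i] = 100 + i;
    printf("%u\n", read_word(17));       /* miss: block 4 loaded into line 0  */
    printf("%u\n", read_word(18));       /* hit: same block, no memory access */
    return 0;
}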
Caching in a Memory Hierarchy

• A smaller, faster, more expensive device at level k caches a subset of the blocks from level k+1.
• Data is copied between levels in block-sized transfer units.
• The larger, slower, cheaper storage device at level k+1 is partitioned into blocks.

[Figure: level k holds a few blocks (e.g., 4, 9, 14, 3) copied from level k+1, which is partitioned into blocks 0 through 15.]
General Caching Concepts

• Program needs object d, which is stored in some block b.
• Cache hit
  ◦ Program finds b in the cache at level k. E.g., block 14.
• Cache miss
  ◦ b is not at level k, so the level k cache must fetch it from level k+1. E.g., block 12.
  ◦ If the level k cache is full, then some current block must be replaced (evicted). Which one is the "victim"?
  ◦ Placement policy: where can the new block go? E.g., b mod 4 (see the sketch below).
  ◦ Replacement policy: which block should be evicted? E.g., LRU.

[Figure: a request for block 14 hits in a four-slot level k cache holding blocks 4, 9, 14, 3; a request for block 12 misses and is fetched from level k+1, which holds blocks 0 through 15.]
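As a quick worked check of the b mod 4 placement policy (the block numbers below are the ones from the figure):

#include <stdio.h>

int main(void)
{
    int blocks[] = {4, 9, 14, 3, 12};
    for (int i = 0; i < 5; i++)          /* placement policy: block b may  */
        printf("block %2d -> slot %d\n", /* only be placed in slot b mod 4 */
               blocks[i], blocks[i] % 4);
    return 0;
}

Blocks 4, 9, 14, and 3 land in slots 0 through 3, which is why they can coexist at level k; the newly requested block 12 also maps to slot 0 and therefore evicts block 4.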
Typical Cache Organization

In this organization, the cache connects to the processor via data, control, and address lines. The data and address lines also attach to data and address buffers, which attach to a system bus from which the main memory is reached.

[Figure: typical cache organization, showing the hit and miss paths between processor, cache, buffers, and the system bus.]
Elements of Cache Design

• Cache Addresses: logical, physical
• Cache Size
• Mapping Function: direct, associative, set associative
• Replacement Algorithm: least recently used (LRU), first in first out (FIFO), least frequently used (LFU), random
• Write Policy: write through, write back
• Line Size
• Number of Caches: single or two level; unified or split
A System with Virtual Memory

• Address translation: the hardware converts virtual addresses into physical addresses via an OS-managed lookup table (the page table).
• For reads from and writes to main memory, a hardware memory management unit (MMU) translates each virtual address into a physical address in main memory.
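A minimal sketch of that translation step, assuming 4 KiB pages and a single-level page table held as a flat array (both assumptions are purely illustrative; real MMUs use multi-level tables and TLBs):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12                    /* assumed 4 KiB page size   */
#define NPAGES     16                    /* toy virtual address space */

static uint32_t page_table[NPAGES] = {   /* VPN -> physical frame number; */
    [0] = 7, [1] = 2, [2] = 5,           /* entries managed by the OS     */
};                                       /* (values here are made up)     */

/* What the MMU does on each memory access: split the virtual address into
   a virtual page number (VPN) and an offset, look the VPN up in the page
   table, and attach the resulting frame number to the unchanged offset. */
static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn    = vaddr >> PAGE_SHIFT;
    uint32_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);
    return (page_table[vpn] << PAGE_SHIFT) | offset;
}

int main(void)
{
    printf("virtual 0x%05x -> physical 0x%05x\n", 0x01ABC, translate(0x01ABC));
    return 0;
}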
Direct Mapping

• The simplest technique
• Maps each block of main memory into only one possible cache line: line i = block j mod m, where m is the number of lines
• Many-to-one: many blocks compete for the same line
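A sketch of how the cache control logic splits an address under direct mapping. The geometry is assumed for illustration: 24-bit addresses, 4-byte blocks (a 2-bit word field), and 16K lines (a 14-bit line field), leaving an 8-bit tag:

#include <stdio.h>
#include <stdint.h>

#define WORD_BITS 2                      /* assumed 4-byte blocks   */
#define LINE_BITS 14                     /* assumed 16K cache lines */

int main(void)
{
    uint32_t addr = 0x16339C;            /* an arbitrary 24-bit address */
    uint32_t word = addr & ((1u << WORD_BITS) - 1);
    uint32_t line = (addr >> WORD_BITS) & ((1u << LINE_BITS) - 1);
    uint32_t tag  = addr >> (WORD_BITS + LINE_BITS);

    /* Direct mapping: the line field selects exactly one cache line, and
       the stored tag must equal the address tag for a hit (many-to-one). */
    printf("tag=0x%02x line=0x%04x word=%u\n", tag, line, word);
    return 0;
}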
Associative Mapping

• Permits each main memory block to be loaded into any line of the cache
• The cache control logic interprets a memory address simply as a Tag and a Word field
• To determine whether a block is in the cache, the cache control logic must simultaneously examine every line's Tag for a match
• Any-to-any: any block may occupy any line
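A sketch of that lookup (in hardware all tag comparators operate simultaneously; the loop below can only model the comparisons sequentially, and the cache size and contents are made up):

#include <stdio.h>
#include <stdint.h>

#define LINES 8                          /* toy fully associative cache */

static struct { int valid; uint32_t tag; } cache[LINES] = {
    {1, 0x3A}, {1, 0x07}, {0, 0}, {1, 0x51},
};

/* Fully associative lookup: the address carries only Tag and Word fields,
   so a hit requires checking the tag against every line in the cache. */
static int find_line(uint32_t tag)
{
    for (int i = 0; i < LINES; i++)
        if (cache[i].valid && cache[i].tag == tag)
            return i;                    /* hit: any line may hold any block */
    return -1;                           /* miss */
}

int main(void)
{
    printf("tag 0x51 -> line %d\n", find_line(0x51));   /* hit in line 3 */
    printf("tag 0x10 -> line %d\n", find_line(0x10));   /* miss: -1      */
    return 0;
}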
Set Associative Mapping

• A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
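A sketch of the combination, using an illustrative 2-way cache with 4 sets: the set number is computed directly from the block number (as in direct mapping), and only the lines within that one set are then searched associatively.

#include <stdio.h>
#include <stdint.h>

#define SETS 4                           /* toy 2-way set associative cache */
#define WAYS 2

static struct { int valid; uint32_t tag; } cache[SETS][WAYS];

/* Set associative lookup: block j maps to set j mod SETS (the direct part),
   then the WAYS lines of that set are compared by tag (the associative part). */
static int lookup(uint32_t block)
{
    uint32_t set = block % SETS;
    uint32_t tag = block / SETS;
    for (int w = 0; w < WAYS; w++)
        if (cache[set][w].valid && cache[set][w].tag == tag)
            return 1;                    /* hit somewhere in the set */
    return 0;                            /* miss */
}

int main(void)
{
    cache[12 % SETS][0].valid = 1;       /* install block 12: set 0, way 0 */
    cache[12 % SETS][0].tag   = 12 / SETS;
    printf("block 12: %s\n", lookup(12) ? "hit" : "miss");
    printf("block  8: %s\n", lookup(8)  ? "hit" : "miss");
    return 0;
}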
Replacement Algorithms

The most common replacement algorithms are:
Least recently used (LRU)
• Most effective
• Replace that block in the set that has been in the cache longest with no
reference to it
• Because of its simplicity of implementation, LRU is the most popular
replacement algorithm
First-in-first-out (FIFO)
• Replace that block in the set that has been in the cache longest.
• Easily implemented as a round-robin or circular buffer technique
Least frequently used (LFU)
• Replace that block in the set that has experienced the fewest
references
• Could be implemented by associating a counter with each line.
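A sketch of how each policy picks its victim within a full set, tracking a last-use time for LRU, a load time for FIFO, and a reference counter for LFU (the bookkeeping fields are illustrative; real hardware typically uses cheaper approximations such as per-line USE bits):

#include <stdio.h>

#define WAYS 4

struct line { int last_used; int loaded_at; int refs; };

static int lru(const struct line s[])    /* oldest last reference */
{
    int v = 0;
    for (int i = 1; i < WAYS; i++) if (s[i].last_used < s[v].last_used) v = i;
    return v;
}

static int fifo(const struct line s[])   /* in the cache longest */
{
    int v = 0;
    for (int i = 1; i < WAYS; i++) if (s[i].loaded_at < s[v].loaded_at) v = i;
    return v;
}

static int lfu(const struct line s[])    /* fewest references */
{
    int v = 0;
    for (int i = 1; i < WAYS; i++) if (s[i].refs < s[v].refs) v = i;
    return v;
}

int main(void)
{
    struct line set[WAYS] = {            /* {last_used, loaded_at, refs} */
        {9, 1, 5}, {4, 2, 9}, {7, 3, 2}, {8, 0, 3},
    };
    printf("LRU evicts way %d, FIFO evicts way %d, LFU evicts way %d\n",
           lru(set), fifo(set), lfu(set));
    return 0;
}

Note that the three policies disagree on this set: LRU evicts way 1 (not referenced since time 4), FIFO evicts way 3 (loaded at time 0), and LFU evicts way 2 (only 2 references).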

Write Policy

There are two problems to contend with:

• More than one device may have access to main memory.
• A more complex problem occurs when multiple processors are attached to the same bus and each processor has its own local cache: if a word is altered in one cache, it could conceivably invalidate a word in the other caches.
Write Through
◦ The simplest technique
◦ All write operations are made to main memory as well as to the cache
◦ The main disadvantage is that it generates substantial memory traffic
Write Back
◦ Minimizes memory writes
◦ Updates are made only in the cache
◦ Portions of main memory are invalid, and hence accesses by I/O modules can be allowed only through the cache
◦ This makes for complex circuitry
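A sketch contrasting the two policies on a single cached word, assuming a per-line dirty bit for write back (names and structure are illustrative):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

static uint32_t main_memory[16];
static struct { uint32_t data; bool dirty; } line;    /* one cached word */

/* Write through: every write goes to the cache AND to main memory, so
   memory always stays valid, at the cost of traffic on every store. */
static void write_through(uint32_t addr, uint32_t v)
{
    line.data = v;
    main_memory[addr] = v;
}

/* Write back: writes update only the cache and set the dirty bit; main
   memory is updated only when the line is evicted, which minimizes
   traffic but leaves memory temporarily invalid. */
static void write_back(uint32_t v)
{
    line.data  = v;
    line.dirty = true;
}

static void evict(uint32_t addr)
{
    if (line.dirty) { main_memory[addr] = line.data; line.dirty = false; }
}

int main(void)
{
    write_through(1, 7);
    printf("write through: memory[1] = %u\n", main_memory[1]);          /* 7 at once */
    write_back(42);
    printf("write back, before evict: memory[3] = %u\n", main_memory[3]); /* stale 0 */
    evict(3);
    printf("write back, after evict:  memory[3] = %u\n", main_memory[3]); /* now 42  */
    return 0;
}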
