
Lecture 6 - Address Mapping & Replacement

The document discusses different cache mapping techniques including direct mapping, set-associative mapping, and fully associative mapping. It explains how each technique maps cache blocks to lines using the tag and index fields of the memory address. Direct mapping maps each block to one line, set-associative mapping maps blocks to sets with multiple lines, and fully associative allows blocks to map to any open line. The document also briefly introduces the topic of cache coherency between caches in multiprocessing systems.

Uploaded by

Kartik Kundal

Address Mapping & Replacement
Commonly used methods:
➢ Direct-Mapped Cache
➢ Associative Mapped Cache
➢ Set-Associative Mapped Cache
Direct-Mapped Cache
➢ Each block of main memory maps to only one cache line
➢ i.e. if a block is in the cache, it must be in one specific place
➢ The address is in two parts
➢ The least significant w bits identify a unique word
➢ The most significant s bits specify one memory block
➢ The MSBs are split into a cache line field of r bits and a tag of s − r bits (most significant)
Tag (s − r): 8 bits | Line or slot (r): 14 bits | Word (w): 2 bits

❑ 24 bit address
❑ 2 bit word identifier (4 byte block)
❑ 22 bit block identifier
❑ 8 bit tag (=22-14)
❑ 14 bit slot or line
❑ No two blocks in the same line have the same Tag field
❑ Check contents of cache by finding line and checking Tag
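The find-the-line-then-check-the-tag procedure above can be sketched in Python. This is a minimal sketch assuming the 8/14/2 field widths of the example; `split_address` is an illustrative helper, not real hardware:

```python
# Direct-mapped address decomposition for the 24-bit example above:
# 8-bit tag, 14-bit line, 2-bit word (field widths taken from the slide).
TAG_BITS, LINE_BITS, WORD_BITS = 8, 14, 2

def split_address(addr):
    """Split a 24-bit address into its (tag, line, word) fields."""
    word = addr & ((1 << WORD_BITS) - 1)
    line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)
    tag = addr >> (WORD_BITS + LINE_BITS)
    return tag, line, word

tag, line, word = split_address(0xFFFFFC)
print(hex(tag), hex(line), hex(word))   # 0xff 0x3fff 0x0
```

A lookup then indexes the cache with `line` and compares only the stored tag for that one line against `tag`.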
➢ The tag memory is much smaller than in an associative-mapped cache.
➢ No need for an associative search, since the slot field directs the comparison to a single tag.
➢ Consider what happens when a program references locations that are 2^19 words apart, which is the size of the cache. Every memory reference will result in a miss, which will cause an entire block to be read into the cache even though only a single word is used.
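The thrashing pattern just described can be illustrated with the 8/14/2 field split from the earlier example, where one full pass of the cache covers 2^16 byte addresses (the constant and helper names here are illustrative):

```python
LINE_BITS, WORD_BITS = 14, 2
CACHE_SPAN = 1 << (LINE_BITS + WORD_BITS)   # addresses covered by one pass: 2**16

def line_of(addr):
    # Same line-index extraction that direct mapping uses.
    return (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)

a = 0x001234
b = a + CACHE_SPAN                 # exactly one cache-size apart
print(line_of(a) == line_of(b))    # True  -> both addresses compete for one line
print(a >> 16 == b >> 16)          # False -> tags differ, so each access evicts the other
```

Alternating references to `a` and `b` therefore miss every time, even though the rest of the cache sits idle.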
➢ Address length = (s + w) bits
➢ Number of addressable units = 2^(s+w) words or bytes
➢ Block size = line size = 2^w words or bytes
➢ Number of lines in cache = m = 2^r
➢ Size of tag = (s − r) bits
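Plugging the example's numbers into the summary formulas above gives the following (a sketch; the variable names are mine):

```python
# Direct-mapping summary with the example's numbers:
# 24-bit address, 2-bit word field, 14-bit line field.
s_plus_w, w, r = 24, 2, 14
s = s_plus_w - w                    # 22 block-identifier bits

addressable_units = 2 ** s_plus_w   # 2^(s+w) addressable bytes
block_size = 2 ** w                 # 2^w bytes per line
num_lines = 2 ** r                  # m = 2^r lines
tag_bits = s - r                    # (s - r)-bit tag

print(addressable_units, block_size, num_lines, tag_bits)
# 16777216 4 16384 8
```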
Associative Mapped Cache
➢ A main memory block can load into any line of the cache
➢ The memory address is interpreted as tag and word
➢ The tag uniquely identifies a block of memory
➢ Every line's tag is examined for a match
➢ Cache searching gets expensive
Tag: 22 bits | Word: 2 bits
➢ 22-bit tag stored with each 32-bit block of data
➢ Compare the tag field with each tag entry in the cache to check for a hit
➢ The least significant 2 bits of the address identify which byte is required from the 32-bit data block
➢ e.g.
Address: FFFFFC | Tag: 3FFFFF | Data: 24682468 | Cache line: 3FFF
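The fully associative lookup can be sketched as a tag-keyed table (a toy model: the dict stands in for the parallel tag comparison, and the example values come from the slide above):

```python
WORD_BITS = 2                               # 4-byte blocks, as in the example

def assoc_tag(addr):
    # Fully associative: the tag is the whole 22-bit block number.
    return addr >> WORD_BITS

cache = {assoc_tag(0xFFFFFC): 0x24682468}   # tag -> data; any block, any slot

def lookup(addr):
    # Real hardware compares every stored tag simultaneously; a dict models that.
    return cache.get(assoc_tag(addr))

print(hex(assoc_tag(0xFFFFFC)))   # 0x3fffff
print(hex(lookup(0xFFFFFC)))      # 0x24682468
```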
➢ Any main memory block can be placed into any cache slot.
➢ Regardless of how irregular the data and program references are, if a slot is available for the block, it can be stored in the cache.
➢ Considerable hardware overhead is needed for cache bookkeeping.
➢ There must be a mechanism for searching the tag memory in parallel.
➢ Address length = (s + w) bits
➢ Number of addressable units = 2^(s+w) words or bytes
➢ Block size = line size = 2^w words or bytes
➢ Number of lines in cache = undetermined
➢ Size of tag = s bits
Set-Associative Mapped Cache
➢ Cache is divided into a number of sets
➢ Each set contains a number of lines
➢ A given block maps to any line in a given set
➢ e.g. Block B can be in any line of set i
➢ 2-way set-associative mapping
➢ A given block can be in one of 2 lines in only one set
➢ In our example the tag memory increases only slightly from the direct mapping, and only two tags need to be searched for each memory reference.
➢The set-associative cache is widely used in today’s
microprocessors.
➢ Address length = (s + w) bits
➢ Number of addressable units = 2^(s+w) words or bytes
➢ Block size = line size = 2^w words or bytes
➢ Number of blocks in main memory = 2^s
➢ Number of lines in set = k
➢ Number of sets = v = 2^d
➢ Size of tag = (s − d) bits
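A k-way lookup following these formulas might be sketched as below. The field widths are assumptions for a 2-way cache over a 24-bit address (d = 13, so v = 2^13 sets); the point is that only the k tags in one set are compared:

```python
WORD_BITS, SET_BITS, K = 2, 13, 2      # v = 2**13 sets, k = 2 lines per set

def set_and_tag(addr):
    set_idx = (addr >> WORD_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (WORD_BITS + SET_BITS)
    return set_idx, tag

sets = [[None] * K for _ in range(1 << SET_BITS)]   # each entry: (tag, data)

def store(addr, data):
    set_idx, tag = set_and_tag(addr)
    sets[set_idx][0] = (tag, data)      # replacement policy omitted for brevity

def lookup(addr):
    set_idx, tag = set_and_tag(addr)
    for entry in sets[set_idx]:         # only k comparisons, not a full search
        if entry is not None and entry[0] == tag:
            return entry[1]
    return None                         # miss

store(0xFFFFFC, 0x24682468)
print(hex(lookup(0xFFFFFC)))   # 0x24682468
```

With k = 1 this degenerates to direct mapping, and with one set holding every line it becomes fully associative.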
Cache Coherency
➢ The synchronization of data in multiple caches, such that reading a memory location via any cache will return the most recent data written to that location via any (other) cache.

➢ Some parallel processors do not cache accesses to shared memory, to avoid the issue of cache coherency.
➢ If caches are used with shared memory, then some system is required to detect when data in one processor's cache should be discarded or replaced because another processor has updated that memory location. Several such schemes have been devised.
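One such scheme is write-invalidate snooping, in which a write by one processor discards stale copies in the other caches. A toy sketch (the class and names are illustrative, not any specific real protocol):

```python
class SnoopingCache:
    """Toy write-invalidate scheme: a write in one cache discards
    copies of that address held by every other cache on the bus."""

    def __init__(self, bus):
        self.data = {}          # addr -> cached value
        self.bus = bus
        bus.append(self)

    def write(self, addr, value):
        for other in self.bus:
            if other is not self:
                other.data.pop(addr, None)   # invalidate stale copies
        self.data[addr] = value

bus = []
c0, c1 = SnoopingCache(bus), SnoopingCache(bus)
c0.data[0x100] = 1          # both caches hold the old value
c1.data[0x100] = 1
c0.write(0x100, 2)          # c0 updates -> c1's copy is invalidated
print(0x100 in c1.data)     # False
print(c0.data[0x100])       # 2
```

After the invalidation, c1's next read of that location misses and must fetch the up-to-date value, which is exactly the detect-and-discard behaviour described above.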
➢ Introduction to Cache Memory
✓ Definition
✓ Working
✓ Levels
✓ Organization
➢ Mapping Techniques
✓ Direct Mapping
✓ Fully Associative Mapping
✓ Set-Associative Mapping
➢ Cache Coherency
