Mapping functions (COA), uploaded by aljufmuhammad

Mapping Functions

The transformation of data from main memory to cache memory is referred to as a mapping
process. Three types of mapping procedure are used in the organization of cache memory:

• Direct mapping

• Associative mapping

• Set-associative mapping

Direct mapping

Consider a cache consisting of 128 blocks of 16 words each and a main memory consisting
of 4096 blocks of 16 words each. Assume that the main memory is addressable by a 16-bit
address. The simplest way to associate main memory blocks with cache blocks is direct
mapping. In this technique, block K of the main memory maps onto block K modulo 128 of
the cache. Thus, whenever one of the main memory blocks 0, 128, 256, ... is loaded into the
cache, it is stored in cache block 0; blocks 1, 129, 257, ... are stored in cache block 1,
and so on. Since more than one main memory block is mapped onto a given cache block position,
contention may arise for that position even when the cache is not full. For example, the
instructions of a program may start in block 1 and continue in block 129, possibly after a
branch. As this program is executed, both of these blocks must be transferred to the block-1
position in the cache. Contention is resolved by allowing the new block to overwrite the
currently resident block.
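The modulo rule above can be sketched in a few lines of Python; the function name is illustrative, not part of any standard API:

```python
# Sketch of the direct-mapping rule: main memory block K maps
# to cache block K mod 128 (for the 128-block cache described above).

CACHE_BLOCKS = 128

def direct_map(block_k: int) -> int:
    """Return the cache block position that main memory block K maps to."""
    return block_k % CACHE_BLOCKS

# Blocks 0, 128, 256 all contend for cache block 0; 1, 129, 257 for block 1.
print([direct_map(k) for k in (0, 128, 256)])   # [0, 0, 0]
print([direct_map(k) for k in (1, 129, 257)])   # [1, 1, 1]
```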
Placement of a block in the cache is determined from the memory address, which can be
divided into three fields. The low-order 4 bits select one of the 16 words in a
block. When a new block enters the cache, the 7-bit cache block field determines the cache
position in which this block must be stored. The high-order 5 bits of the memory address of
the block are stored in 5 tag bits associated with its location in the cache. As execution
proceeds, the 7-bit cache block field of each address generated by the CPU points to a
particular block location in the cache. The high-order 5 bits of the address are compared with
the tag bits associated with that cache location. If they match, the desired word is in that
block of the cache. If there is no match, the block containing the required word must
first be read from main memory and loaded into the cache. The direct mapping technique
is easy to implement, but it is not flexible.

Figure: Direct mapped cache
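The address split and tag check can be sketched as follows; this is a minimal illustration of the scheme described above (5-bit tag, 7-bit block, 4-bit word), with invented helper names:

```python
def split_direct(address: int):
    """Split a 16-bit address into (tag, cache block, word) fields."""
    word = address & 0xF             # low-order 4 bits: word within block
    block = (address >> 4) & 0x7F    # next 7 bits: cache block position
    tag = (address >> 11) & 0x1F     # high-order 5 bits: tag
    return tag, block, word

tags = [None] * 128   # one stored tag per cache block (None = empty)

def access(address: int) -> bool:
    """Return True on a hit; on a miss, load the block (overwriting)."""
    tag, block, _ = split_direct(address)
    if tags[block] == tag:
        return True
    tags[block] = tag   # new block overwrites the resident one
    return False
```

For example, addresses 0x0000 and 0x0003 fall in the same block, so after a miss on the first, the second is a hit.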


Associative mapping

Associative mapping is a more flexible mapping technique. In this technique, a main memory
block can be placed into any cache block position. In this case, 12 tag bits are required to
identify the memory block when it is resident in the cache. The tag bits of an address from
the CPU are compared with the tag bits of each block of the cache to see whether the desired
block is present. The associative mapping technique gives complete freedom in choosing the
cache location in which to place the memory block, so the space in the cache can be used
more efficiently. A new block brought into the cache has to replace an existing block only
if the cache is full, in which case an algorithm is needed to select the block to be
replaced. The cost of an associative cache is higher than that of a direct-mapped cache
because of the need to search all 128 tag patterns to determine whether a given block is in
the cache. A search of this kind is called an associative search.

Figure: Associative mapped cache
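A sketch of the associative search, assuming the same 16-bit addresses: the 12 high-order bits (the full block number) serve as the tag, and every resident tag is examined. The FIFO replacement here is purely illustrative; the text only says that some replacement algorithm is needed.

```python
CAPACITY = 128
cache_tags = []          # up to 128 resident 12-bit tags

def assoc_access(address: int) -> bool:
    """Return True on a hit; on a miss, load the block."""
    tag = address >> 4               # 12-bit tag = block number
    if tag in cache_tags:            # search ALL resident tags
        return True
    if len(cache_tags) == CAPACITY:
        cache_tags.pop(0)            # replacement needed only when full
                                     # (FIFO here, for illustration only)
    cache_tags.append(tag)
    return False
```

In hardware this search is done in parallel over all 128 tags, which is what makes the associative cache costly; the sequential `in` test here only models the outcome, not the circuitry.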

Set associative mapping

This technique is a combination of the direct and associative mapping techniques. The blocks
of the cache are grouped into sets, and the mapping allows a block of the main memory to reside
in any block of a specific set. Hence, the contention problem of the direct method is eased by
having a few choices for block placement; at the same time, the hardware cost is reduced by
decreasing the size of the associative search. The figure shows an example of this
set-associative technique for a cache with two blocks per set. In this case, memory blocks
0, 64, 128, ..., 4032 map onto cache set 0, and they can occupy either of the two block
positions within this set. Having 64 sets means that the 6-bit set field of the address
determines which set of the cache might contain the desired block. The tag field of the
address must then be associatively compared with the tags of the two blocks of the set to
check whether the desired block is present.

Figure: Set associative mapped cache.
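For the two-way cache above, the 16-bit address splits into a 4-bit word field, a 6-bit set field, and a 6-bit tag. A minimal sketch, with an arbitrary replacement choice (real caches would use LRU or similar):

```python
SETS, WAYS = 64, 2
sets = [[None] * WAYS for _ in range(SETS)]   # stored tags per set

def set_assoc_access(address: int) -> bool:
    """2-way set-associative lookup: 4-bit word, 6-bit set, 6-bit tag."""
    s = (address >> 4) & 0x3F     # 6-bit set field selects one set
    tag = address >> 10           # remaining 6 high-order bits: tag
    if tag in sets[s]:            # associative search within ONE set only
        return True
    # Replace within the set; shifting toward way 1 is illustrative only.
    sets[s][1] = sets[s][0]
    sets[s][0] = tag
    return False
```

Note that only two tags per access are compared, instead of all 128 as in the fully associative case; this is the reduced associative search the text refers to.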

Cache initialization

The cache is initialized when power is applied to the computer or when the main memory is
loaded with a complete set of programs from auxiliary memory. After initialization the cache
is considered to be empty, but in effect it contains some invalid data. It is therefore
customary to include with each word in the cache a valid bit that indicates whether or not
the word contains valid data.
The cache is initialized by clearing all the valid bits to 0. The valid bit of a particular
cache word is set to 1 the first time the word is loaded from main memory, and it stays set
unless the cache has to be initialized again. The introduction of the valid bit means that a
word in the cache is not replaced by another word unless the valid bit is set to 1 and a
mismatch of tags occurs. If the valid bit happens to be 0, the new word automatically replaces
the invalid data. Thus the initialization condition has the effect of forcing misses from the
cache until the cache fills with valid data.
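The valid-bit behaviour can be sketched on top of the direct-mapped cache from earlier; `reset_cache` and `lookup` are illustrative names, not part of any real API:

```python
BLOCKS = 128
valid = [False] * BLOCKS   # one valid bit per cache block
tags = [0] * BLOCKS        # stored tag per cache block

def reset_cache():
    """Initialization: clear every valid bit; resident data becomes invalid."""
    for i in range(BLOCKS):
        valid[i] = False

def lookup(address: int) -> bool:
    block = (address >> 4) & 0x7F
    tag = address >> 11
    # A hit requires BOTH a set valid bit and a tag match.
    if valid[block] and tags[block] == tag:
        return True
    tags[block] = tag
    valid[block] = True    # set on first load; stays set until re-init
    return False

reset_cache()
print(lookup(0x0800))   # first access after initialization: forced miss
print(lookup(0x0800))   # valid bit set and tags match: hit
```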
