DLCA Topic 5-3

This document discusses computer memory hierarchies and cache memory mapping techniques. It describes the need for a memory hierarchy to balance the trade-offs among memory cost, capacity, and access time. Cache mapping methods such as direct mapping, set associative mapping, and fully associative mapping are explained. The document also covers cache coherence techniques such as the write-through, write-back, and buffered write policies. The structure of associative memory and its use for cache tag storage are also outlined.

Physical Types; Physical Characteristics

What does the Memory Hierarchy include?
• The memory hierarchy arranges the memory and storage devices of a computer system in a proper order.
• It ranges from the slowest but highest-capacity secondary storage devices to the fastest but lowest-capacity cache memory.
Need for a Memory Hierarchy:
There is a trade-off among the following characteristics of memory:
1. Cost
2. Capacity
3. Access time
The memory hierarchy is used to balance this trade-off.

Average Access time (tA) of Memory System:
• Let 'H' be the 'Hit ratio', i.e. the probability of accessing the information from the cache memory, with access time 'tA1'
• Hence, (1 - H) is the 'Miss ratio', i.e. the probability of getting the information from Main Memory (missed from cache), with access time 'tA2'
• Hence, Average Access time (tA): tA = H.tA1 + (1 - H).tA2

Average cost (C) of Memory System:
Cost of Cache Memory = C1.S1
Cost of Main Memory = C2.S2
Total cost of entire memory system = C1.S1 + C2.S2
Total size of memory system = S1 + S2
Hence, Average Cost/bit (C): C = (C1.S1 + C2.S2)/(S1 + S2)

Objective of Memory Hierarchy:
To have the average cost/bit near to that of Main Memory and the average access time near to that of Cache Memory
As H → 1, tA → tA1, and if S2 >> S1, then C ≈ C2
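
As a quick illustration (not from the slides), the two formulas above can be evaluated directly. The hit ratio, access times, costs, and sizes below are assumed numbers, chosen only to show that tA stays close to tA1 and C stays close to C2.

```python
def average_access_time(hit_ratio, t_cache, t_main):
    """tA = H.tA1 + (1 - H).tA2"""
    return hit_ratio * t_cache + (1.0 - hit_ratio) * t_main

def average_cost_per_bit(c_cache, s_cache, c_main, s_main):
    """C = (C1.S1 + C2.S2) / (S1 + S2)"""
    return (c_cache * s_cache + c_main * s_main) / (s_cache + s_main)

# Assumed figures: 95% hit ratio, 10 ns cache, 100 ns main memory,
# 64 KB cache (2**19 bits) at 0.01 cost/bit, 16 MB main memory (2**27 bits) at 0.0001 cost/bit.
tA = average_access_time(0.95, 10, 100)            # 14.5 ns, close to tA1
C = average_cost_per_bit(0.01, 2**19, 0.0001, 2**27)   # about 0.00014, close to C2
print(tA, C)
```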

• On a 'Cache Miss', the MM block containing the requested data is located and copied into a cache line
• Mapping defines the cache line into which the incoming MM block is to be copied
The various mapping methods are:
1. Direct Mapping
2. 2-Way Set Associative Mapping
3. Fully Associative Mapping
4-way/8-way set associative mapping is also possible, but such systems are rarely used

Consider: Cache Memory size = 64 KB = 2^16 Bytes
Main Memory (MM) size = 16 MB = 2^24 Bytes
Cache line size = MM block size = 4 Bytes
Size of MM address = 24 bits
In Direct Mapping, the cache has a 'single bank' of 64 KB
No. of cache lines = 2^16/4 = 2^14
No. of MM blocks = 2^24/4 = 2^22

The 24-bit MM address is used as:
LSB 2 bits define the byte within the cache line
Middle 14 bits identify one particular line out of the 2^14 lines
MSB 8 bits are the Tag (directory) bits for the cache line
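
A minimal sketch (assumed, not part of the slides) of this direct-mapping address split using bit operations; the function name and constants are illustrative only.

```python
LINE_SIZE = 4        # bytes per cache line
NUM_LINES = 2 ** 14  # 64 KB cache / 4-byte lines

def direct_map_split(address):
    """Split a 24-bit MM address into (tag, line, byte) as on the slide."""
    byte = address & (LINE_SIZE - 1)          # LSB 2 bits: byte within the line
    line = (address >> 2) & (NUM_LINES - 1)   # middle 14 bits: cache line index
    tag = address >> 16                       # MSB 8 bits: tag/directory bits
    return tag, line, byte

print([hex(x) for x in direct_map_split(0x123456)])  # ['0x12', '0xd15', '0x2']
```
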
Consider: Cache Memory size = 64 KB = 2^16 Bytes
Main Memory (MM) size = 16 MB = 2^24 Bytes
Cache line size = MM block size = 4 Bytes
In 2-Way Set Associative Mapping, the cache has 'two banks'
Size of Bank 'A' = Bank 'B' = 32 KB each (32 KB = 2^15 Bytes)
No. of cache lines per bank = 2^15/4 = 2^13
No. of MM blocks = 2^24/4 = 2^22

Size of MM address = 24 bits, and it is used as:
LSB 2 bits define the byte within the cache line
Middle 13 bits identify one particular line out of the 2^13 lines in each bank
MSB 9 bits are the Tag (directory) bits for the cache line
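
A similar sketch (assumed) for the 2-way set associative split; on a lookup, the 9-bit tag is compared against the tags held by both banks for the selected line.

```python
NUM_SETS = 2 ** 13   # lines per bank

def set_assoc_split(address):
    byte = address & 0x3                      # LSB 2 bits
    index = (address >> 2) & (NUM_SETS - 1)   # middle 13 bits: line within each bank
    tag = address >> 15                       # MSB 9 bits
    return tag, index, byte

# One tag array per bank; None means the line is empty.
tags = {"A": [None] * NUM_SETS, "B": [None] * NUM_SETS}

def is_hit(address):
    tag, index, _ = set_assoc_split(address)
    # Hit if either bank holds the tag for the selected line.
    return any(tags[bank][index] == tag for bank in ("A", "B"))
```
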
Consider: Cache Memory size = 64 KB = 2^16 Bytes
Main Memory (MM) size = 16 MB = 2^24 Bytes
Cache line size = MM block size = 4 Bytes
Size of MM address = 24 bits
In Fully Associative Mapping also, the cache memory has a 'single bank' of 64 KB
No. of cache lines = 2^16/4 = 2^14
No. of MM blocks = 2^24/4 = 2^22

The 24-bit main memory address is used as:
LSB 2 bits define the byte within the cache line
MSB 22 bits are the Tag (directory) bits for the cache line

Advantage: more flexible mapping
Drawback: more memory required for the Tag/directory
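
A minimal sketch (assumed) of a fully associative lookup: since a block may sit in any of the 2^14 lines, the 22-bit tag must be compared against every stored tag. A Python dict keyed by tag stands in for the parallel compare that hardware performs with a CAM (described later).

```python
cache = {}   # tag -> 4-byte MM block

def fully_assoc_read(address, main_memory):
    tag = address >> 2                  # MSB 22 bits
    byte = address & 0x3                # LSB 2 bits
    if tag not in cache:                # miss: copy the whole MM block into the cache
        start = address & ~0x3
        cache[tag] = main_memory[start:start + 4]
    return cache[tag][byte]
```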

• Keeping the contents of the Cache memory and the Main memory identical is called 'Cache coherency/consistency'
• Fundamental principle:
"The cache memory should be a true subset of the Main memory at all times"
Cache inconsistency is observed in the following situation:
1. The CPU reads data from main memory and copies it into the cache
2. Within the cache, the CPU may update/change its contents without updating the same location within the Main memory
3. Here, the cache location has the updated data, but the Main memory location still has the earlier data (thus creating cache inconsistency)
Various cache update/write policies have been developed to maintain coherency:
• Write-through policy
• Buffered write (delayed/deferred write)
• Write-back policy
Write-Through policy: Once the cache is updated, the corresponding Main memory location is immediately updated by the CPU by running an MM write/update cycle
• Always keeps the contents of both memories consistent
• Sometimes the CPU must defer a very important ongoing task to carry out a comparatively minor update (reducing the CPU's processing bandwidth at that time)
Buffered-Write policy: Here, a buffer memory is employed to hold the successive cache memory updates while the CPU is busy performing the important task
• Once the CPU becomes free (or is performing a less critical task), all the cache updates recorded in this buffer are transferred to main memory
• Deciding the optimum size of the buffer memory is the design constraint
Write-Back policy: The CPU performs updates in the cache memory, but these are not conveyed to main memory immediately
• Main memory is written/updated from the cache memory only when the cache line is replaced by another (during cache line replacement)
• No special memory write cycle is needed here
• The most preferred method for maintaining cache coherency
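
A minimal sketch (assumed, not from the slides) contrasting write-through and write-back for a single line; a 'dirty' flag marks a line whose contents differ from main memory and must be copied back when the line is replaced.

```python
main_memory = {0x100: 7}
cache = {}   # address -> {"data": value, "dirty": bool}

def cpu_write(address, value, policy):
    cache[address] = {"data": value, "dirty": False}
    if policy == "write-through":
        main_memory[address] = value        # MM updated immediately
    elif policy == "write-back":
        cache[address]["dirty"] = True      # MM updated only on replacement

def replace_line(address):
    line = cache.pop(address)
    if line["dirty"]:                       # write-back: copy back now
        main_memory[address] = line["data"]

cpu_write(0x100, 42, "write-back")
print(main_memory[0x100])   # 7  -> MM is stale until the line is replaced
replace_line(0x100)
print(main_memory[0x100])   # 42 -> consistent again
```
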
Structure of Associative Memory:
• It is a "Content Addressable Memory" (CAM)
• Data searching is done according to the contents (and not by specifying the corresponding memory address)
• Used to hold the 'Tag/Directory bits' in the cache mapping methods
Data write/update in Associative Memory:
• Input data is taken into the 'Input Register'
• Contents of the 'Input Register' are updated/written into the '2-D Storage Cell Array' of the CAM
Data search & read from Associative Memory:
• The contents to be searched are taken into the 'Mask Register' as per the "Key" of the search
• Searching is done associatively within the '2-D Storage Cell Array' of the CAM using this "Key"
• If the content search is successful, then 'Match = 1'
• The Select Logic then sets Select = 1 to enable and copy the matching record into the 'Output Register' and subsequently onto the Output lines
1-bit CAM Cell
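
A minimal sketch (assumed) of an associative search: each stored word is compared against the key, but only in the bit positions selected by the mask register; the loop stands in for the parallel match logic of the CAM.

```python
words = [0b10110010, 0b10010110, 0b01110010]   # rows of the 2-D storage cell array

def cam_search(key, mask):
    """Return indices whose masked bits equal the key's masked bits (Match = 1)."""
    return [i for i, w in enumerate(words) if (w & mask) == (key & mask)]

# Search on the upper 4 bits only; the lower 4 bits are "don't care".
print(cam_search(key=0b10110000, mask=0b11110000))  # [0] -> word 0 is gated to the output register
```
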
Suppose m = 2 and n = 3 (total address of 5 bits):
The decoder is 2:4, hence the number of MM modules = 4
The LSB 3 address bits are connected to each MM module
Hence, the size of each module = 2^3 = 8 memory locations
MM Module 1 maps addresses 00000 to 00111
MM Module 2 maps addresses 01000 to 01111
MM Module 3 maps addresses 10000 to 10111
MM Module 4 maps addresses 11000 to 11111
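
A minimal sketch (assumed) of this decoding: the MSB m = 2 bits drive the 2:4 decoder that selects a module, and the LSB n = 3 bits select the location inside it.

```python
M_BITS, N_BITS = 2, 3

def high_order_select(address):
    module = (address >> N_BITS) + 1          # MSB 2 bits -> module 1..4
    offset = address & ((1 << N_BITS) - 1)    # LSB 3 bits -> location within the module
    return module, offset

print(high_order_select(0b00111))   # (1, 7): last location of Module 1
print(high_order_select(0b01000))   # (2, 0): first location of Module 2
```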

Suppose m = 2 and n = 3 (total address of 5 bits):
The decoder is 2:4, hence the number of MM modules = 4
The MSB 3 address bits are connected to each MM module (the LSB 2 bits drive the decoder)
Hence, the size of each module = 2^3 = 8 memory locations
1st address = 00000, MM Module 1 selected (address 000 within it)
2nd address = 00001, MM Module 2 selected (address 000 within it)
3rd address = 00010, MM Module 3 selected (address 000 within it)
4th address = 00011, MM Module 4 selected (address 000 within it)
5th address = 00100, MM Module 1 selected (now address 001 within it)
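
A minimal sketch (assumed) of this interleaved case: the LSB 2 bits now drive the decoder, so consecutive addresses fall in successive modules, while the MSB 3 bits select the location within each module.

```python
M_BITS = 2

def low_order_select(address):
    module = (address & ((1 << M_BITS) - 1)) + 1   # LSB 2 bits -> module 1..4
    offset = address >> M_BITS                     # MSB 3 bits -> location within the module
    return module, offset

for addr in range(5):
    print(format(addr, "05b"), low_order_select(addr))
# 00000 (1, 0), 00001 (2, 0), 00010 (3, 0), 00011 (4, 0), 00100 (1, 1)
```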
