
Computer Org and Arch
R.Magesh
BITS Pilani, Pilani Campus

Contact Session 4
30-01-2021
Cache Memory

• Access time to level i: t_i = τ_1 + τ_2 + … + τ_i (the τ's are the individual access times of the levels searched up to and including level i)
• Effective access time: t_eff = Σ_i m_i · h_i · t_i, where h_i is the hit ratio of level i and m_i is the probability of missing in every level before level i
• Miss before level i: m_i = (1 − h_1)(1 − h_2) … (1 − h_(i−1))
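A minimal Python sketch of these formulas; the hit ratios and per-level access times used in the example call are illustrative values, not from the notes:

```python
# Effective access time of a multi-level memory hierarchy.
# t_i is cumulative: time spent searching levels 1..i.
# miss_before = probability that every level before the current one missed.

def effective_access_time(hit_ratios, level_times):
    """hit_ratios[i] = h_(i+1); level_times[i] = tau_(i+1)."""
    t_eff = 0.0
    miss_before = 1.0        # m_1 = 1: every access reaches level 1
    t_cumulative = 0.0
    for h, tau in zip(hit_ratios, level_times):
        t_cumulative += tau                  # t_i = tau_1 + ... + tau_i
        t_eff += miss_before * h * t_cumulative
        miss_before *= (1.0 - h)             # (1-h_1)...(1-h_i) for the next level
    return t_eff

# Illustrative three-level hierarchy (hit ratios and times in ns are assumed values):
print(effective_access_time([0.95, 0.90, 1.0], [1, 10, 100]))   # about 2.0 ns
```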
Cache contd
• Program execution time = I_C × T × (CPI_exec + memory stalls per instruction), where I_C is the instruction count and T the clock cycle time.
• Memory stalls per instruction = miss rate × miss penalty × memory accesses per instruction.
Problem 1
• CPI = 1.2, miss rate = 0.5%, block size B = 16 words, memory accesses per instruction = 1; miss penalty = ?
Assume bus cycle = CPU cycle.
• Data/address transfer time = 1 cycle
• Memory latency = 10 cycles
• Find the effective CPI for A) serial-mode memory, B) parallel-mode memory, C) interleaved memory.
Cache prob1
• A) Serial mode: miss penalty = 16 × (1 + 10 + 1) = 192 cycles.
• B) Parallel mode (memory and bus both 4 words wide): miss penalty = 4 × (1 + 10 + 1) = 48 cycles.
• C) Interleaved memory (paged DRAM; memory 4 words wide, bus 1 word wide): miss penalty = 4 × (1 + 10 + 4) = 60 cycles.
• Effective CPI: A) 2.16, B) 1.44, C) 1.50.
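A short Python check of these numbers, using the memory-stall formula from the previous slide (all values taken from the problem statement):

```python
# Problem 1: effective CPI = base CPI + miss rate * miss penalty * accesses/instr
base_cpi = 1.2
miss_rate = 0.005            # 0.5 %
accesses_per_instr = 1

penalties = {
    "serial":      16 * (1 + 10 + 1),   # 16 one-word transfers
    "parallel":     4 * (1 + 10 + 1),   # 4-word-wide memory and bus
    "interleaved":  4 * (1 + 10 + 4),   # 4 banks, 1-word-wide bus
}

for mode, penalty in penalties.items():
    cpi_eff = base_cpi + miss_rate * penalty * accesses_per_instr
    print(f"{mode}: penalty={penalty} cycles, CPI_eff={cpi_eff:.2f}")
# Prints 2.16, 1.44 and 1.50, matching the slide.
```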
Cache Prob2
• Multi-level cache example:
CPI with no misses = 1.0; clock = 500 MHz (cycle time = 2 ns).
Main memory access time = 200 ns.
Miss rate = 5%.
Adding an L2 cache (20 ns access time) reduces the miss rate to main memory to 2%.
Find the performance improvement.
• Miss penalty (memory) = 200 ns / 2 ns = 100 cycles.
Effective CPI with L1 only = 1 + 5% × 100 = 6 cycles.
Miss penalty (L2) = 20 ns / 2 ns = 10 cycles.
• Total CPI = base CPI + stalls due to L1 misses + stalls due to L2 misses
= 1.0 + 5% × 10 + 2% × 100 = 3.5 cycles.
• Performance ratio = 6 / 3.5 ≈ 1.7.
• Note that the 2% miss rate is the miss rate of the combined L1–L2 cache (the global miss rate).
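The same calculation in Python (500 MHz clock, so one cycle is 2 ns):

```python
# Problem 2: CPI with and without an L2 cache.
cycle_ns = 2
base_cpi = 1.0
l1_miss = 0.05       # fraction of accesses missing in L1
global_miss = 0.02   # fraction still missing after L2 (global miss rate)

mem_penalty = 200 // cycle_ns    # 100 cycles
l2_penalty = 20 // cycle_ns      # 10 cycles

cpi_l1_only = base_cpi + l1_miss * mem_penalty                              # 6.0
cpi_with_l2 = base_cpi + l1_miss * l2_penalty + global_miss * mem_penalty   # 3.5

print(cpi_l1_only, cpi_with_l2, cpi_l1_only / cpi_with_l2)   # 6.0 3.5 ~1.71
```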
Cache Memory Comparison
• Cache comparison example:

  Cache   Mapping    Block size   I-miss   D-miss   CPI_eff
  1       Direct     1 word       4%       8%       2.0
  2       Direct     4 words      2%       5%       ?
  3       2-way SA   4 words      2%       4%       ?

• Miss penalty = 6 + block size (in words).
• 50% of instructions have a data reference.
• Find the effective CPI for the three cases.
Cache Comparison
• Stall cycles per instruction:
Cache 1: 7 × (0.04 + 0.08 × 0.5) = 0.56
Cache 2: 10 × (0.02 + 0.05 × 0.5) = 0.45
Cache 3: 10 × (0.02 + 0.04 × 0.5) = 0.40
• Hence CPI_eff = base CPI + stall cycles.
Base CPI = 2.0 − 0.56 = 1.44 (from cache 1).
CPI_eff for cache 2 = 1.44 + 0.45 = 1.89
CPI_eff for cache 3 = 1.44 + 0.40 = 1.84
contd
• Miss penalty = 6 + block size.
• Assume 50% of instructions have data references.
• Answer: 1.89, 1.84.
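A quick Python check of the comparison, using the slide's assumptions (miss penalty = 6 + block size, 50% of instructions make a data reference):

```python
# Stall cycles per instruction = penalty * (I_miss + D_miss * data_refs_per_instr)
data_refs = 0.5
caches = {
    1: {"block": 1, "imiss": 0.04, "dmiss": 0.08},
    2: {"block": 4, "imiss": 0.02, "dmiss": 0.05},
    3: {"block": 4, "imiss": 0.02, "dmiss": 0.04},
}

stalls = {n: (6 + c["block"]) * (c["imiss"] + c["dmiss"] * data_refs)
          for n, c in caches.items()}

base_cpi = 2.0 - stalls[1]        # cache 1's effective CPI of 2.0 is given
for n in caches:
    print(f"cache {n}: stalls={stalls[n]:.2f}, CPI_eff={base_cpi + stalls[n]:.2f}")
# cache 1: 0.56 -> 2.00, cache 2: 0.45 -> 1.89, cache 3: 0.40 -> 1.84
```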
• A processor employs a three-level memory hierarchy. If a referenced word is in the cache, 10 ns are required to access it; if it is in DRAM but not in the cache, 100 ns are required; and if it is on disk, the access takes 10 ms, followed by 100 ns to copy the word into the cache, after which the reference is started again. If the cache hit ratio is 0.95 and the main memory hit ratio is 0.6, compute the average access time for a referenced word.
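The notes stop at the problem statement. A sketch of the calculation under one common interpretation (the 0.6 main-memory hit ratio is taken as conditional on a cache miss, and the restarted reference after a disk transfer is counted as one extra cache access):

```python
# Three-level hierarchy average access time (interpretation assumed as stated above).
t_cache, t_dram, t_disk = 10e-9, 100e-9, 10e-3   # seconds
h_cache, h_dram = 0.95, 0.6                      # h_dram conditional on a cache miss

t_avg = (h_cache * t_cache
         + (1 - h_cache) * h_dram * t_dram
         + (1 - h_cache) * (1 - h_dram) * (t_disk + 100e-9 + t_cache))
print(f"{t_avg * 1e9:.1f} ns")   # roughly 2e5 ns, dominated by the disk term
```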
• Compute the number of clock cycles necessary to evaluate the statement a = a + b + a * c on an 8-bit stack machine with 16-bit addresses. Assume that every memory access takes two clock cycles and that the execution phase of any instruction takes only one clock cycle; assume also that addresses are absolute.
• Answer:
• PUSH a # 7 clock cycles
• PUSH c # 7
• MUL # 4
• PUSH b # 7
• ADD # 4
• PUSH a # 7
• ADD # 4
• POP a # 7
• If each variable in memory is addressed by a 16-bit absolute address, then each word access from memory takes 2 clocks, as given.
• Decoding the opcode is taken as 1 clock cycle (assumed 1 by default).
• A PUSH therefore takes 2 + 1 + 2 + 2 = 7 clocks: 2 for the opcode fetch, 1 for decode, 2 for the operand address fetch (16-bit address), and 2 for the operand fetch (or operand store, in the case of POP).
• Total for the 8 instructions: 5 × 7 + 3 × 4 = 47 clocks.
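A tiny Python tally of the same count, using the per-instruction cycle costs derived above:

```python
# Cycle count for: a = a + b + a * c on the stack machine described above.
MEM_INSTR_CYCLES = 7   # PUSH / POP: opcode fetch + decode + address fetch + operand
ALU_INSTR_CYCLES = 4   # ADD / MUL: opcode fetch + decode + execute

program = ["PUSH a", "PUSH c", "MUL", "PUSH b", "ADD", "PUSH a", "ADD", "POP a"]
total = sum(MEM_INSTR_CYCLES if op.split()[0] in ("PUSH", "POP") else ALU_INSTR_CYCLES
            for op in program)
print(total)   # 47
```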
• A 4-way set-associative cache memory in the MERS X486 computer system has 4 words in each set. A replacement procedure based on the LRU algorithm is implemented by means of 2-bit counters associated with each word in the set, so a value in the range 0 to 3 is recorded for each word. When a hit occurs, the counter associated with the referenced word is set to 0, the counters whose values were originally lower than the referenced word's are incremented by 1, and all others remain unchanged. When a miss occurs, the word whose counter value is 3 is replaced with the new word, its counter is reset to 0, and the other three counters are incremented by 1.
Show that this procedure works for the following sequence of word references: A, B, C, D, B, E, D, A, C, E, C, E (start with A, B, C, D as the initial four words, with A being the least recently used).
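A minimal Python simulation of the 2-bit-counter procedure (the dict-based model of one set is an assumption for illustration); it traces the remaining references once A, B, C, D are loaded:

```python
# 2-bit LRU counters for one 4-word set: counter value 3 marks the least recently used word.
def reference(counters, word):
    """counters maps word -> 0..3; apply one reference and report hit or miss."""
    if word in counters:                       # hit
        old = counters[word]
        for w, c in counters.items():
            if c < old:
                counters[w] = c + 1            # only counters below the old value move up
        counters[word] = 0
        return "hit"
    victim = next(w for w, c in counters.items() if c == 3)   # miss: evict counter value 3
    del counters[victim]
    for w in counters:
        counters[w] += 1                       # everyone else ages by one
    counters[word] = 0                         # new word becomes most recently used
    return f"miss, replaced {victim}"

# Initial contents: A is least recently used (counter 3), D most recently used (counter 0).
set_state = {"A": 3, "B": 2, "C": 1, "D": 0}
for ref in "B E D A C E C E".split():
    print(ref, reference(set_state, ref), set_state)
```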
Cache/Main Memory Structure
Cache operation – overview
• CPU requests contents of memory location
• Check cache for this data
• If present, get from cache (fast)
• If not present, read required block from main
memory to cache
• Then deliver from cache to CPU
• Cache includes tags to identify which block of
main memory is in each cache slot
Cache Read Operation - Flowchart
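The flowchart reduces to a single check-then-fetch decision; a minimal Python sketch of that flow (the dict-based cache model is an assumption for illustration, not the actual hardware structure):

```python
# Read flow: check the cache first, fall back to main memory on a miss.
def read(address, cache, main_memory, block_size=4):
    block_no = address // block_size
    offset = address % block_size
    block = cache.get(block_no)
    if block is not None:              # hit: deliver the word from the cache (fast)
        return block[offset]
    block = main_memory[block_no]      # miss: read the required block from main memory
    cache[block_no] = block            # ...place it in the cache (the key acts as the tag)
    return block[offset]               # ...then deliver the word to the CPU
```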
Cache Design
• Addressing
• Size
• Mapping Function
• Replacement Algorithm
• Write Policy
• Block Size
• Number of Caches
Cache Addressing
• Where does the cache sit?
– Between the processor and the virtual memory management unit (MMU)
– Between the MMU and main memory
• A logical (virtual) cache stores data using virtual addresses
– The processor accesses the cache directly, without going through the MMU
– Cache access is faster, since it happens before MMU address translation
– Different applications use the same virtual address space
• So the cache must be flushed on each context switch
• A physical cache stores data using main memory physical addresses
Size does matter
• Cost
– More cache is expensive
• Speed
– More cache is faster (up to a point)
– Checking cache for data takes time
Typical Cache Organization
Mapping Function
• Cache of 64 kBytes
• Cache block of 4 bytes
– i.e. the cache has 16k (2^14) lines of 4 bytes
• 16 MBytes of main memory
• 24-bit address
– (2^24 = 16M)
Direct Mapping
• Each block of main memory maps to only one
cache line
– i.e. if a block is in cache, it must be in one specific place
• Address is in two parts
• Least Significant w bits identify unique word
• Most Significant s bits specify one memory block
• The MSBs are split into a cache line field r and a
tag of s-r (most significant)
Direct Mapping
Address Structure

Tag: s − r = 8 bits | Line (slot): r = 14 bits | Word: w = 2 bits

• 24-bit address
• 2-bit word identifier (4-byte block)
• 22-bit block identifier
– 8-bit tag (= 22 − 14)
– 14-bit slot or line field
• No two blocks that map to the same line have the same tag field
• Check contents of cache by finding the line and checking the tag
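A small sketch of the field extraction for this example (64 kB cache, 4-byte blocks, 24-bit addresses); the example address is chosen arbitrarily:

```python
# Split a 24-bit address into tag / line / word for the direct-mapped example.
WORD_BITS, LINE_BITS, TAG_BITS = 2, 14, 8

def split_address(addr):
    word = addr & ((1 << WORD_BITS) - 1)                    # byte within the 4-byte block
    line = (addr >> WORD_BITS) & ((1 << LINE_BITS) - 1)     # which of the 16k cache lines
    tag = addr >> (WORD_BITS + LINE_BITS)                   # remaining 8 bits stored as the tag
    return tag, line, word

print([hex(f) for f in split_address(0x16339C)])
```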
Direct Mapping from Cache to Main Memory
Direct Mapping
Cache Line Table

Cache line   Main memory blocks held
0            0, m, 2m, 3m, …, 2^s − m
1            1, m+1, 2m+1, …, 2^s − m + 1
…
m − 1        m − 1, 2m − 1, 3m − 1, …, 2^s − 1
Direct Mapping Cache Organization
Direct Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in cache = m = 2^r
• Size of tag = (s − r) bits
Direct Mapping pros & cons
• Simple
• Inexpensive
• Fixed location for given block
– If a program accesses 2 blocks that map to the
same line repeatedly, cache misses are very high
Associative Mapping
• A main memory block can load into any line of
cache
• Memory address is interpreted as tag and
word
• Tag uniquely identifies block of memory
• Every line’s tag is examined for a match
• Cache searching gets expensive
Associative Mapping from
Cache to Main Memory
Fully Associative Cache Organization
Associative Mapping
Address Structure
Tag: 22 bits | Word: 2 bits

• 22-bit tag stored with each 32-bit block of data
• Compare the tag field with every tag entry in the cache to check for a hit
• Least significant 2 bits of the address identify which byte is required from the 32-bit (4-byte) data block
• e.g.
– Address    Tag      Data       Cache line
– FFFFFC     3FFFFF   24682468   3FFF
Associative Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in cache = undetermined (any block can go in any line)
• Size of tag = s bits
Set Associative Mapping
• Cache is divided into a number of sets
• Each set contains a number of lines
• A given block maps to any line in a given set
– e.g. Block B can be in any line of set i
• e.g. 2 lines per set
– 2 way associative mapping
– A given block can be in one of 2 lines in only one
set
Set Associative Mapping
Example
• 13-bit set number
• Block number in main memory is taken modulo 2^13
• 000000, 008000, 010000, 018000, … map to the same set
Mapping From Main Memory to Cache:
v Associative
Mapping From Main Memory to Cache:
k-way Associative
K-Way Set Associative Cache Organization
Set Associative Mapping
Address Structure
Tag: 9 bits | Set: 13 bits | Word: 2 bits

• Use the set field to determine which cache set to look in
• Compare the tag field of each line in that set to see if we have a hit
• e.g.
– Address      Tag   Data       Set number
– 1FF 7FFC     1FF   12345678   1FFF
– 001 7FFC     001   11223344   1FFF
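A short sketch checking the example above (9/13/2 field split over a 24-bit address; the addresses are reconstructed from the tag and offset shown on the slide):

```python
# Decompose a 24-bit address into tag / set / word for the 2-way set-associative example.
WORD_BITS, SET_BITS = 2, 13

def split(addr):
    word = addr & 0b11
    set_no = (addr >> WORD_BITS) & ((1 << SET_BITS) - 1)
    tag = addr >> (WORD_BITS + SET_BITS)
    return tag, set_no, word

for tag, rest in [(0x1FF, 0x7FFC), (0x001, 0x7FFC)]:
    addr = (tag << 15) | rest            # 15 = SET_BITS + WORD_BITS
    print(hex(addr), [hex(x) for x in split(addr)])
# Both addresses fall in set 0x1FFF and are distinguished only by their tags.
```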
Set Associative Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^s
• Number of lines in a set = k
• Number of sets = v = 2^d
• Number of lines in cache = k·v = k × 2^d
• Size of tag = (s − d) bits
Direct and Set Associative Cache
Performance Differences

• Significant up to at least 64 kB for 2-way
• The difference between 2-way and 4-way at 4 kB is much smaller than the gain from growing the cache from 4 kB to 8 kB
• Cache complexity increases with associativity
• Not justified against simply increasing cache size to 8 kB or 16 kB
• Above 32 kB, higher associativity gives no improvement
• (simulation results)
Figure 4.16 Varying Associativity over Cache Size: hit ratio (0.0–1.0) versus cache size (1k–1M bytes) for direct-mapped and 2-, 4-, 8- and 16-way set-associative caches.
Replacement Algorithms (1)
Direct mapping
• No choice
• Each block only maps to one line
• Replace that line
Replacement Algorithms (2)
Associative & Set Associative
• Must be implemented in hardware (for speed)
• Least recently used (LRU)
– e.g. in a 2-way set-associative cache: which of the 2 blocks is the LRU?
• First in first out (FIFO)
– replace the block that has been in the cache longest
• Least frequently used (LFU)
– replace the block which has had the fewest hits
• Random
Write Policy
• Must not overwrite a cache block unless main
memory is up to date
• Multiple CPUs may have individual caches
• I/O may address main memory directly
Write through
• All writes go to main memory as well as the cache
• Multiple CPUs can monitor main memory traffic to keep their local caches up to date
• Lots of memory traffic
• Slows down writes
• Remember bogus write-through caches!
Write back
• Updates are initially made in the cache only
• An update (dirty) bit for the cache slot is set when an update occurs
• When a block is to be replaced, it is written back to main memory only if the update bit is set
• Other caches can get out of sync
• I/O must access main memory through the cache
• N.B. 15% of memory references are writes
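A minimal sketch of the write-back behaviour described above (the line and memory structures are assumptions for illustration):

```python
# Write-back: writes only set a dirty bit; main memory is updated on eviction.
class WriteBackLine:
    def __init__(self, tag, data):
        self.tag, self.data, self.dirty = tag, data, False

def write(line, offset, value):
    line.data[offset] = value
    line.dirty = True                     # update bit set; main memory is now stale

def evict(line, main_memory, block_addr):
    if line.dirty:                        # write back only if the block was modified
        main_memory[block_addr] = line.data
```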
Line Size
• Retrieve not only the desired word but a number of adjacent words as well
• Increasing the block size will increase the hit ratio at first
– the principle of locality
• The hit ratio will decrease as the block becomes even bigger
– the probability of using the newly fetched information becomes less than the probability of reusing the data it replaced
• Larger blocks
– reduce the number of blocks that fit in the cache
– data may be overwritten shortly after being fetched
– each additional word is less local, so less likely to be needed
• No definitive optimum value has been found
• 8 to 64 bytes seems reasonable
• For HPC systems, 64-byte and 128-byte blocks are most common
