
CS-213
Computer Organization and Architecture

Chapter 4
Cache Memory

Overview
• The following topics are discussed:
—Characteristics of memory system
—Cache memory principles
—Elements of cache design
– Cache size
– Mapping function
– Replacement algorithms
– Write policy
– Line size
– Number of caches

1
Characteristics
The complex subject of computer memory is
made easier if we classify memory systems
according to their characteristics:
—Location
—Capacity
—Unit of transfer
—Access method
—Performance
—Physical type
—Physical characteristics
—Organisation
Let’s discuss them one by one

Location
• Internal: on board/on chip (name them all!)
• External: accessible via I/O controllers

2
Capacity

• An obvious characteristic of memory… how much?
• Usually expressed in terms of bytes (byte = 8 bits)
or words (common word lengths are 8, 16 and 32 bits)

Unit of Transfer
• Number of bits read out of or written into
memory at a time
• Internal
— Usually equal to the number of data lines on the data bus
— The data bus width may equal the word length, but is often
larger, such as 64, 128, or 256 bits

• External
— Often transferred in larger units called blocks

• Addressable unit
— Smallest location which can be uniquely addressed
— In most contemporary systems, addressing is at the byte level
— If A = number of lines on the address bus, then 2^A = ?
(see the sketch below)

• Unit of transfer need not equal a word or
an addressable unit
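
As a quick illustration of the relationship between address-bus width and addressable units, the short C sketch below computes 2^A for a few sample bus widths; the widths chosen here are arbitrary examples, not values from the slides.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* If A is the number of lines on the address bus, the memory can
       distinguish 2^A addressable units (bytes, in byte-addressable systems). */
    int widths[] = {16, 24, 32};            /* sample bus widths (arbitrary) */
    for (int i = 0; i < 3; i++) {
        int A = widths[i];
        uint64_t units = (uint64_t)1 << A;  /* 2^A addressable units */
        printf("A = %2d address lines -> %llu addressable units\n",
               A, (unsigned long long)units);
    }
    return 0;
}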

3
Access Methods (1)
• Sequential
—Start at the beginning and read through in
order
—Access time depends on location of data and
previous location
—e.g. ?
• Direct
—Access is by jumping to vicinity plus
sequential search
—Access time depends on location and previous
location
—e.g. ?

Access Methods (2)


• Random
—Individual addresses identify locations exactly
—Access time is independent of location or
previous access
—e.g. ?
• Associative
—Data is located by a comparison with contents
of a portion of the store
—Access time is independent of location or
previous access
—e.g. ?

4
Memory Hierarchy
• For greater performance, memory must
keep up with the processor
• Memory hierarchy evolved as a solution
• Internal
—Registers, In CPU
—May include one or more levels of cache
—"RAM"
• External memory
—Hard disks
—Backing store
• Trade-off among three key characteristics
—Capacity, Access time, Cost (Dilemma for designer?)

Memory Hierarchy - Diagram

• The way out of this dilemma is not to rely
on a single memory component but to employ
a hierarchy (registers and cache on chip,
main memory on board, external storage beyond)
• Smaller, more expensive, faster memories
are supplemented by larger, cheaper, slower
memories
• The key to success is to reduce the frequency of
access to the slower, lower levels of the hierarchy
• Example: L1 has 1000 words with access
time 0.01 µs and L2 has 100,000 words with
access time 0.1 µs. With a hit ratio of 95% the
average access time is:
(0.95)(0.01) + (0.05)(0.01 + 0.1) = 0.015 µs
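
A minimal C sketch of this two-level average access time calculation, reproducing the numbers above (the variable names are mine, not from the slides):

#include <stdio.h>

int main(void)
{
    double t1 = 0.01;        /* L1 access time in microseconds   */
    double t2 = 0.1;         /* L2 access time in microseconds   */
    double hit_ratio = 0.95; /* fraction of accesses found in L1 */

    /* On a hit the cost is t1; on a miss the word is first fetched
       from L2 and then accessed in L1, costing t1 + t2.           */
    double avg = hit_ratio * t1 + (1.0 - hit_ratio) * (t1 + t2);

    printf("Average access time = %.3f us\n", avg);  /* 0.015 us */
    return 0;
}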

5
Performance
• Access time
—Time from the instant an address is presented on
the address bus to the instant data is stored
or made available (this is for random access; what
about non-random access?)
• Memory Cycle time
—Time the memory may require to "recover" before
the next access (transients die out on the signal lines)
—Cycle time = access time + this additional recovery time
• Transfer Rate
—Rate at which data can be moved

Physical Types
• Semiconductor
—RAM
• Magnetic
—Disk & Tape
• Optical
—CD & DVD
• Others
—Holographic data storage. Holography is the science of
producing holograms; it is an advanced form of photography that
allows an image to be recorded in three dimensions. The technique of
holography can also be used to optically store, retrieve, and process
information.

Assignment 2: What are bubble data storage and
microfiche?

6
Physical Characteristics
• Volatile memory e.g.?
• Non-Volatile memory e.g.?
• Erasable e.g.?
• Non-Erasable e.g.?
• Power consumption (power needed to retain
contents or not?)

The Bottom Line


• How much?
—Capacity
• How fast?
—Time is money
• How expensive?

7
Hierarchy List
• Registers
• L1 Cache
• L2 Cache
• Main memory
• Disk cache (a portion of main memory reserved as a
buffer to hold disk data temporarily. Advantages: writes can be
clustered, and data read again soon after being written can be
served from memory before it is dumped to disk)

• Expanded storage (does not typically fit into the
hierarchy, though)

• Disk
• Optical
• Tape

So you want fast?


• It is possible to build a computer which
uses only static RAM (discussed later)
• This would be very fast
• This would need no cache
—How can you cache cache?
• This would cost a very large amount

8
Locality of Reference
• During the course of the execution of a
program, memory references tend to
cluster
• e.g. ?
• The next access is most likely to be to
neighbouring data or instructions in the
cluster
• Bring the whole cluster/block to the higher
level so that the percentage of accesses to
each successively lower level is reduced –
this principle is known as Locality of
Reference

Cache
• Small amount of fast memory
• Sits between normal main memory and
CPU
• May be located on CPU chip or module
• When the processor attempts to read from
memory, a check is made to determine if
the word is in the cache (hit or miss). Why fetch a whole block?

9
Cache/Main Memory Structure

Cache operation – overview


• CPU requests contents of memory location
• Check cache for this data
• If present, get from cache (fast)
• If not present, read required block from
main memory to cache/CPU
• Cache includes tags to identify which
block of main memory is in each cache
slot
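
This check can be sketched in a few lines of C. The sketch below models a small direct-mapped cache (one of the mapping schemes covered later in this chapter); the sizes and names are illustrative, not taken from the slides.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define MEM_SIZE   (1u << 16)         /* small simulated main memory      */
#define BLOCK_SIZE 4                  /* bytes per block/line             */
#define NUM_LINES  1024               /* illustrative number of lines     */

struct line { bool valid; uint32_t tag; uint8_t data[BLOCK_SIZE]; };

static uint8_t     main_memory[MEM_SIZE];
static struct line cache[NUM_LINES];

/* Read one byte: check the cache first, fetch the whole block on a miss. */
static uint8_t cache_read(uint32_t addr, bool *hit)
{
    uint32_t block = addr / BLOCK_SIZE;      /* block number in memory    */
    uint32_t slot  = block % NUM_LINES;      /* candidate cache slot      */
    uint32_t tag   = block / NUM_LINES;      /* identifies which block    */

    *hit = cache[slot].valid && cache[slot].tag == tag;
    if (!*hit) {                             /* miss: load block from RAM */
        memcpy(cache[slot].data, &main_memory[block * BLOCK_SIZE], BLOCK_SIZE);
        cache[slot].tag   = tag;
        cache[slot].valid = true;
    }
    return cache[slot].data[addr % BLOCK_SIZE];
}

int main(void)
{
    bool hit;
    main_memory[0x1234] = 42;                /* put something in "RAM"    */
    cache_read(0x1234, &hit);
    printf("first access:  %s\n", hit ? "hit" : "miss");          /* miss */
    uint8_t v = cache_read(0x1234, &hit);
    printf("second access: %s, value %u\n",
           hit ? "hit" : "miss", (unsigned)v);                    /* hit, 42 */
    return 0;
}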

10
Typical Cache Organization

Cache Read Operation - Flowchart

11
Elements of Cache Design
• Size
• Mapping Function
• Replacement Algorithm
• Write Policy
• Block Size
• Number of Caches

Let’s discuss them one by one

Size does matter

• Thinking straight away… more CASH, more
comfortable
• More cache … faster (but how much cache?)
• Problems if we increase the size
—More cache needs more CASH (expensive)
—Checking the cache for data takes time
—Available chip and board area also limits the size
• Impossible to arrive at a single "optimum
size" … because cache performance is very
sensitive to the nature of the workload

12
Comparison of Cache Sizes

Processor        Type                           Year of Introduction   L1 cache (a)    L2 cache         L3 cache
IBM 360/85       Mainframe                      1968                   16 to 32 KB     —                —
PDP-11/70        Minicomputer                   1975                   1 KB            —                —
VAX 11/780       Minicomputer                   1978                   16 KB           —                —
IBM 3033         Mainframe                      1978                   64 KB           —                —
IBM 3090         Mainframe                      1985                   128 to 256 KB   —                —
Intel 80486      PC                             1989                   8 KB            —                —
Pentium          PC                             1993                   8 KB/8 KB       256 to 512 KB    —
PowerPC 601      PC                             1993                   32 KB           —                —
PowerPC 620      PC                             1996                   32 KB/32 KB     —                —
PowerPC G4       PC/server                      1999                   32 KB/32 KB     256 KB to 1 MB   2 MB
IBM S/390 G4     Mainframe                      1997                   32 KB           256 KB           2 MB
IBM S/390 G6     Mainframe                      1999                   256 KB          8 MB             —
Pentium 4        PC/server                      2000                   8 KB/8 KB       256 KB           —
IBM SP           High-end server/supercomputer  2000                   64 KB/32 KB     8 MB             —
CRAY MTA (b)     Supercomputer                  2000                   8 KB            2 MB             —
Itanium          PC/server                      2001                   16 KB/16 KB     96 KB            4 MB
SGI Origin 2001  High-end server                2001                   32 KB/32 KB     4 MB             —
Itanium 2        PC/server                      2002                   32 KB           256 KB           6 MB
IBM POWER5       High-end server                2003                   64 KB           1.9 MB           36 MB
CRAY XD-1        Supercomputer                  2004                   64 KB/64 KB     1 MB             —

(a) Two values separated by a slash refer to instruction and data caches.
(b) Both caches are instruction only; no data caches.

Mapping Function
• Requirements
— We need an algorithm for mapping, why? Number of blocks vs. number of cache lines
— Need a mechanism to determine which block goes to
which cache line
• Three techniques to address the above
requirements: Direct mapping, Associative mapping, Set
Associative mapping

• Example data for discussing mapping (see the sketch below)
— Cache of 64 KBytes
— Cache block of 4 bytes
– i.e. cache has 16K (2^14) lines of 4 bytes = 16384 lines
— 16 MBytes of main memory
— 24-bit address (2^24 = 16M = 16777216 bytes)
— Total blocks: 2^24/2^2 = 2^22 = 4194304
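
A small C sketch, assuming the example figures above, that derives these counts from the cache size, block size, and address width:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t cache_bytes  = 64 * 1024;   /* 64 KByte cache       */
    uint32_t block_bytes  = 4;           /* 4-byte blocks/lines  */
    uint32_t address_bits = 24;          /* 16 MByte main memory */

    uint32_t cache_lines  = cache_bytes / block_bytes;      /* 16384    */
    uint64_t memory_bytes = (uint64_t)1 << address_bits;    /* 16777216 */
    uint64_t total_blocks = memory_bytes / block_bytes;     /* 4194304  */

    printf("cache lines  = %u\n",   cache_lines);
    printf("memory bytes = %llu\n", (unsigned long long)memory_bytes);
    printf("total blocks = %llu\n", (unsigned long long)total_blocks);
    return 0;
}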

13
Example data

Cache: 16384 lines (line 0 … line 16383), each holding a tag and one
4-byte block of data (words W1 W2 W3 W4).

Main memory: addresses 0 … 16777215, grouped into 4194304 blocks of
4 bytes each:
— Block 0:        addresses 0 … 3              (W1 W2 W3 W4)
— Block 1:        addresses 4 … 7              (W1 W2 W3 W4)
— …
— Block 4194303:  addresses 16777212 … 16777215 (W1 W2 W3 W4)

Direct Mapping
• Each block of main memory maps to only
one cache line
—i.e. if a block is in cache, it must be in one
specific place
• Address is viewed in three parts
• Least Significant w bits identify unique
word
• Most Significant s bits specify one
memory block
• The s bits are split into a cache line field of r bits
and a tag of s–r bits (most significant)

14
Direct Mapping
Address Structure

Tag (s–r)        Line or Slot (r)      Word (w)
8 bits           14 bits               2 bits

• 24 bit address
• 2 bit word identifier (4 byte block)
• 22 bit block identifier
— 8 bit tag (= 22 – 14)
— 14 bit slot or line
• No two blocks which map to the same line have the same Tag field
• Check contents of cache by finding the line and checking the Tag
(see the sketch below)
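
A minimal C sketch of this 8/14/2 split, assuming the 24-bit example address format above (the helper names are mine):

#include <stdio.h>
#include <stdint.h>

#define WORD_BITS 2        /* w: byte within the 4-byte block        */
#define LINE_BITS 14       /* r: cache line (slot) number, 16K lines */
#define TAG_BITS  8        /* s - r: tag stored with the cache line  */

/* Split a 24-bit main-memory address into tag, line, and word fields. */
static void split_address(uint32_t addr)
{
    uint32_t word = addr & ((1u << WORD_BITS) - 1);
    uint32_t line = (addr >> WORD_BITS) & ((1u << LINE_BITS) - 1);
    uint32_t tag  = (addr >> (WORD_BITS + LINE_BITS)) & ((1u << TAG_BITS) - 1);

    printf("addr %06X -> tag %02X, line %04X, word %X\n",
           addr, tag, line, word);
}

int main(void)
{
    /* Two addresses that share a line but differ in tag would collide
       in a direct-mapped cache; they are 2^16 bytes apart here.       */
    split_address(0x000004);   /* tag 00, line 0001, word 0 */
    split_address(0x010004);   /* tag 01, line 0001, word 0 */
    return 0;
}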

Direct Mapping
Cache Line Table

Cache line     Main memory blocks held
0              0, m, 2m, 3m, …, 2^s – m
1              1, m+1, 2m+1, …, 2^s – m + 1
…              …
m–1            m–1, 2m–1, 3m–1, …, 2^s – 1

• Where m is the number of lines in the cache
(block j maps to line j mod m)

15
Direct Mapping Cache Organization

Direct Mapping
Example

16
Direct Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words
or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory =
2^(s+w)/2^w = 2^s
• Number of lines in cache (cache size / block size) =
m = 2^r
• Size of tag = (s – r) bits

Direct Mapping pros & cons


• Simple
• Inexpensive
• Fixed location for given block
—If a program accesses 2 blocks that map to
the same line repeatedly, cache misses are
very high
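
This conflict behaviour is easy to demonstrate: the hedged C sketch below alternates between two blocks that map to the same line of a direct-mapped cache (sizes and block numbers are illustrative), and every single access misses.

#include <stdio.h>
#include <stdint.h>

#define NUM_LINES 16384

static uint32_t line_tag[NUM_LINES];
static int      line_valid[NUM_LINES];

int main(void)
{
    /* Two blocks that map to the same line: their block numbers differ
       by a multiple of NUM_LINES (here blocks 5 and 5 + 16384).        */
    uint32_t blocks[2] = { 5, 5 + NUM_LINES };
    int misses = 0;

    for (int i = 0; i < 10; i++) {             /* alternate between them */
        uint32_t j    = blocks[i % 2];
        uint32_t line = j % NUM_LINES;
        uint32_t tag  = j / NUM_LINES;
        if (!line_valid[line] || line_tag[line] != tag) {
            misses++;                           /* conflict miss          */
            line_tag[line]   = tag;
            line_valid[line] = 1;
        }
    }
    printf("10 alternating accesses -> %d misses\n", misses);   /* 10 */
    return 0;
}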

17
Associative Mapping
• To overcome the disadvantage of direct
mapping, a main memory block can load into
any line of the cache
• Memory address is interpreted as tag and
word
• Tag uniquely identifies a block of memory
• Every line’s tag is examined for a match

Fully Associative Cache Organization

18
Associative
Mapping Example

Associative Mapping
Address Structure

Tag (22 bits)                          Word (2 bits)

• 22 bit tag stored with each 32 bit block of data
• Compare tag field with tag entry in cache to
check for hit
• Least significant 2 bits of the address identify which
byte is required from the 32 bit data block
• e.g. (see the sketch below)
— Address    Tag       Data        Cache line
— FFFFFC     FFFFFC    24682468    3FFF
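
A minimal C sketch of this tag/word interpretation, assuming the running 24-bit example. Note that, numerically, the 22-bit tag field is the address shifted right by two bits (3FFFFF here), while the table above writes the tag using the full block-aligned address (FFFFFC).

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Associative mapping: the whole block number serves as the tag,
       so only the 2-bit word field is stripped off the 24-bit address. */
    uint32_t addr = 0xFFFFFC;          /* example address from the table */
    uint32_t word = addr & 0x3;        /* byte within the 4-byte block   */
    uint32_t tag  = addr >> 2;         /* 22-bit tag = block number      */

    printf("addr %06X -> tag %06X (block number), word %u\n", addr, tag, word);
    /* Hardware compares this tag against the stored tag of every cache
       line simultaneously; the block may reside in any line.            */
    return 0;
}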

19
Associative Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words
or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory =
2^(s+w)/2^w = 2^s

• Number of lines in cache = not determined by the address format
— Cache size / block size = 64 KByte/4 Byte = 16K lines
• Size of tag = s bits

Associative Mapping Drawbacks


• 22 bit tag has to be stored in the cache
along with the data
• 22 bit tag of each cache line has to be
compared for match
• Cache searching gets expensive
• Complex circuitry required to examine the
tag of all cache lines in parallel
• New requirement: replacement algorithm
needed to determine which block to
replace
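
To see why the search is expensive, here is a minimal C sketch of an associative lookup (sizes and values are illustrative). Software must check the stored tags one by one; the cache hardware performs the same comparison for every line in parallel, which is what requires the complex circuitry.

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define NUM_LINES 16384

static uint32_t line_tag[NUM_LINES];
static bool     line_valid[NUM_LINES];

/* Return the line holding the block with this tag, or -1 on a miss. */
static int find_line(uint32_t tag)
{
    for (int i = 0; i < NUM_LINES; i++)
        if (line_valid[i] && line_tag[i] == tag)
            return i;                  /* hit: block is in line i */
    return -1;                         /* miss                    */
}

int main(void)
{
    line_tag[0x3FFF]   = 0x3FFFFF;     /* pretend one block is cached */
    line_valid[0x3FFF] = true;

    printf("tag 3FFFFF -> line %d\n", find_line(0x3FFFFF));   /* 16383 */
    printf("tag 000001 -> line %d\n", find_line(0x000001));   /* -1    */
    return 0;
}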

20
Set Associative Mapping
• Exhibits strength of both Direct & Associative
• Cache is divided into a number of sets
• Each set contains a number of lines
• A given block maps to any line in a given
set
—e.g. Block B can be in any line of set i
• e.g. 2 lines per set
—2 way associative mapping
—A given block can be in one of 2 lines in only
one set

Set Associative Mapping
Example

• 13 bit set number, why?
— Number of sets = total number of cache lines / number of lines in
one set = 2^14/2 = 2^13
• Set number (to which block j maps) = j mod v,
where v = 2^13 sets
• Addresses whose block numbers differ by a multiple of 2^13
(e.g. 000000, 008000, 010000, 018000 …) map to the same set
(see the sketch below)
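
A minimal C sketch, assuming the example's 4-byte blocks and 2^13 sets, showing which set a few addresses fall in (the address list is illustrative):

#include <stdio.h>
#include <stdint.h>

#define BLOCK_SIZE 4                   /* bytes per block                   */
#define NUM_SETS   (1u << 13)          /* v = 2^13 sets (2-way, 16K lines)  */

int main(void)
{
    /* Addresses 0x8000 bytes apart have block numbers 2^13 apart,
       so they all compete for the same 2-line set.                 */
    uint32_t addrs[] = { 0x000000, 0x008000, 0x010000, 0x018000 };

    for (int i = 0; i < 4; i++) {
        uint32_t block = addrs[i] / BLOCK_SIZE;   /* block number j */
        uint32_t set   = block % NUM_SETS;        /* set = j mod v  */
        printf("addr %06X -> block %7u -> set %u\n", addrs[i], block, set);
    }
    return 0;
}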

21
Two Way Set Associative Cache
Organization

Set Associative Mapping
Address Structure

Tag (9 bits)     Set (13 bits)     Word (2 bits)

• Use the set field to determine which cache set to
look in
• Compare the tag field to see if we have a hit
• e.g. (see the sketch below)
—Address      Tag    Data        Set number
—1FF 7FFC     1FF    12345678    1FFF
—001 7FFC     001    11223344    1FFF
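
A minimal C sketch extracting the 9/13/2 fields, assuming the two addresses in the table above (written there as a tag part and a set+word part, e.g. 1FF 7FFC). Both land in set 1FFF, so a 2-way set can hold them at the same time.

#include <stdio.h>
#include <stdint.h>

#define WORD_BITS 2
#define SET_BITS  13
#define TAG_BITS  9

/* Split a 24-bit address into tag, set, and word fields. */
static void split(uint32_t addr)
{
    uint32_t word = addr & ((1u << WORD_BITS) - 1);
    uint32_t set  = (addr >> WORD_BITS) & ((1u << SET_BITS) - 1);
    uint32_t tag  = addr >> (WORD_BITS + SET_BITS);

    printf("addr %06X -> tag %03X, set %04X, word %u\n", addr, tag, set, word);
}

int main(void)
{
    split(0xFFFFFC);   /* tag 1FF, set 1FFF, word 0 */
    split(0x00FFFC);   /* tag 001, set 1FFF, word 0 */
    return 0;
}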

22
Two Way Set Associative Mapping Example

Set Associative Mapping Summary


• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words
or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w)/2^w = 2^s
• Number of lines in set = k
• Number of sets = v = 2^d
• Number of lines in cache = kv = k * 2^d
• Size of tag = (s – d) bits

23
Replacement Algorithms (1)
Direct mapping
• No choice
• Each block only maps to one line
• Replace that line

24
