
CSE 213

Computer Architecture

Lecture 4: Cache Memory

Military Institute of Science and Technology


Outline

◼ Characteristics of Memory Systems
◼ The Memory Hierarchy
◼ Cache Memory Principles
◼ Cache Memory Design
◼ Pentium 4 Cache Organization
Key Characteristics of Computer Memory Systems

Table 4.1 Key Characteristics of Computer Memory Systems

Characteristics of Memory Systems

◼ Location
◼ Refers to whether memory is internal or external to the computer
◼ Internal memory is often equated with main memory
◼ The processor requires its own local memory, in the form of registers
◼ Cache is another form of internal memory
◼ External memory consists of peripheral storage devices that are accessible to the processor via I/O controllers

◼ Capacity
◼ Memory capacity is typically expressed in terms of bytes

◼ Unit of transfer
◼ For internal memory the unit of transfer is equal to the number of electrical lines into and out of the memory module
Method of Accessing Units of Data

◼ Sequential access
• Memory is organized into units of data called records
• Access must be made in a specific linear sequence
• Access time is variable
• Example: tape units

◼ Direct access
• Involves a shared read-write mechanism
• Individual blocks or records have a unique address based on physical location
• Access time is variable
• Example: disk units

◼ Random access
• Each addressable location in memory has a unique, physically wired-in addressing mechanism
• The time to access a given location is independent of the sequence of prior accesses and is constant
• Any location can be selected at random and directly addressed and accessed
• Example: main memory and some cache systems

◼ Associative
• A word is retrieved based on a portion of its contents rather than its address
• Each location has its own addressing mechanism, and retrieval time is constant independent of location or prior access patterns
• Cache memories may employ associative access
Performance

◼ The two most important characteristics of memory are capacity and performance

◼ Three performance parameters are used:

◼ Access time (latency)
• For random-access memory it is the time it takes to perform a read or write operation
• For non-random-access memory it is the time it takes to position the read-write mechanism at the desired location

◼ Memory cycle time
• Access time plus any additional time required before a second access can commence
• Additional time may be required for transients to die out on signal lines or to regenerate data if they are read destructively
• Concerned with the system bus, not the processor

◼ Transfer rate
• The rate at which data can be transferred into or out of a memory unit
• For random-access memory it is equal to 1/(cycle time)
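As a quick worked example (numbers assumed for illustration, not from the slides): a random-access memory with a 10 ns cycle time supports 1/(10 ns) = 100 million transfers per second; if each transfer moves a 4-byte word, that is a transfer rate of roughly 400 MB/s.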
Physical Type

◼ The most common forms are:
◼ Semiconductor memory
◼ Magnetic surface memory
◼ Optical
◼ Magneto-optical

Physical Characteristics

◼ Volatile memory
◼ Information decays naturally or is lost when electrical power is switched off
◼ Nonvolatile memory
◼ Once recorded, information remains without deterioration until deliberately changed
◼ No electrical power is needed to retain information
◼ Magnetic-surface memories are nonvolatile
◼ Semiconductor memory may be either volatile or nonvolatile
◼ Nonerasable memory
◼ Cannot be altered, except by destroying the storage unit
◼ Semiconductor memory of this type is known as read-only memory (ROM)

Organization

◼ For random-access memory the organization is a key design issue
◼ Organization refers to the physical arrangement of bits to form words

Memory Hierarchy

◼ Design constraints on a computer’s memory can be summed up by three questions:
◼ How much, how fast, how expensive

◼ There is a trade-off among capacity, access time, and cost:
◼ Faster access time, greater cost per bit
◼ Greater capacity, smaller cost per bit
◼ Greater capacity, slower access time

◼ The way out of the memory dilemma is not to rely on a single memory component or technology, but to employ a memory hierarchy

Figure 4.1 The Memory Hierarchy

Locality of Reference

◼ During the course of the execution of a program, memory references tend to cluster, e.g. in loops

◼ Locality of reference, also known as the principle of locality, is the tendency of a program to access the same value, or related storage locations, frequently. There are two basic types of reference locality – temporal and spatial locality.

◼ Temporal locality: If at one point in time a particular memory location is referenced, then it is likely that the same location will be referenced again in the near future.

◼ Spatial locality: If a particular memory location is referenced at a particular time, then it is likely that nearby memory locations will be referenced in the near future.
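A minimal C sketch (my own illustration, not from the slides) of how an ordinary loop exhibits both kinds of locality:

#include <stdio.h>

#define N 1024

/* Summing an array shows both kinds of locality:
   - temporal: sum and i are referenced on every iteration
   - spatial: a[0], a[1], a[2], ... are adjacent in memory, so a cache
     that fetches a whole block on a miss serves the next several
     accesses without another trip to main memory */
int main(void) {
    static int a[N];
    long sum = 0;
    for (int i = 0; i < N; i++) {
        sum += a[i];    /* sequential, spatially local accesses */
    }
    printf("sum = %ld\n", sum);
    return 0;
}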
Operation of Two-Level Memory

❑ The locality property can be exploited in the formation of a two-level memory. The upper-level memory (M1) is smaller, faster, and more expensive (per bit) than the lower-level memory (M2).

❑ M1 is used as a temporary store for part of the contents of the larger M2.

❑ When a memory reference is made, an attempt is made to access the item in M1. If this succeeds, then a quick access is made. If not, then a block of memory locations is copied from M2 to M1 and the access then takes place via M1.

❑ Because of locality, once a block is brought into M1, there should be a number of accesses to locations in that block, resulting in fast overall service.
Operation of Two-Level Memory

❑ To express the average time to access an item, we must consider not only the speeds of the two levels of memory, but also the probability that a given reference can be found in M1. We have

Ts = H × T1 + (1 − H) × (T1 + T2)

where

Ts = average (system) access time
T1 = access time of M1 (e.g., cache, disk cache)
T2 = access time of M2 (e.g., main memory, disk)
H = hit ratio (fraction of time a reference is found in M1)
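As a minimal sketch of this formula in C (function and variable names are mine; the sample numbers are illustrative, not from the slides):

#include <stdio.h>

/* Average (system) access time for a two-level memory:
   Ts = H*T1 + (1-H)*(T1+T2)
   On a hit, only M1 is accessed; on a miss, M1 is probed first and
   then the item is fetched via M2. */
double avg_access_time(double hit_ratio, double t1, double t2) {
    return hit_ratio * t1 + (1.0 - hit_ratio) * (t1 + t2);
}

int main(void) {
    /* Illustrative values: T1 = 10 ns, T2 = 100 ns, H = 0.95 */
    double ts = avg_access_time(0.95, 10.0, 100.0);
    printf("Ts = %.1f ns\n", ts);  /* 0.95*10 + 0.05*110 = 15.0 ns */
    return 0;
}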


Example 4.1

CACHE MEMORY PRINCIPLES



Cache

◼ A technique, sometimes referred to as a disk cache, improves performance in two ways:

• Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.

• Some data destined for write-out may be referenced by a program before the next dump to disk. In that case, the data are retrieved rapidly from the cache rather than slowly from the disk.

Cache

◼ Small amount of fast memory
◼ Sits between normal main memory and CPU
◼ May be located on CPU chip or module

Cache and Main Memory

Cache/Main Memory Structure

Cache Read Operation

Typical Cache Organization

ELEMENTS OF CACHE DESIGN



Cache Addresses: Virtual Memory

◼ Virtual memory
◼ Facility that allows programs to address memory from a logical point of view, without regard to the amount of main memory physically available
◼ When used, the address fields of machine instructions contain virtual addresses
◼ For reads from and writes to main memory, a hardware memory management unit (MMU) translates each virtual address into a physical address in main memory

Logical and Physical Caches

Table 4.3 Cache Sizes of Some Processors

a Two values separated by a slash refer to instruction and data caches.

b Both caches are instruction only; no data caches.

Mapping Function

◼ Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines

◼ Three techniques can be used:

◼ Direct
• The simplest technique
• Maps each block of main memory into only one possible cache line

◼ Associative
• Permits each main memory block to be loaded into any line of the cache
• The cache control logic interprets a memory address simply as a Tag and a Word field
• To determine whether a block is in the cache, the cache control logic must simultaneously examine every line’s Tag for a match

◼ Set Associative
• A compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages

Direct Mapping

Direct Mapping Cache Organization

Direct Mapping Cache Example

Direct Mapping Summary

◼ Address length = (s + w) bits
◼ Number of addressable units = 2^(s+w) words or bytes
◼ Block size = line size = 2^w words or bytes
◼ Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
◼ Number of lines in cache = m = 2^r
◼ Size of tag = (s – r) bits
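A small C sketch of how the Tag, Line, and Word fields could be extracted from an address under direct mapping (field widths are assumptions, chosen to match the worked problem later in this deck: a 20-bit address with w = 4 and r = 12):

#include <stdio.h>
#include <stdint.h>

/* Direct mapping: address = | tag (s-r) | line (r) | word (w) | */
#define W_BITS 4   /* word field: 2^4 = 16 bytes per block */
#define R_BITS 12  /* line field: 2^12 = 4096 cache lines  */

int main(void) {
    uint32_t addr = 0xABCDE;                      /* 20-bit example address */
    uint32_t word = addr & ((1u << W_BITS) - 1);  /* lowest w bits */
    uint32_t line = (addr >> W_BITS) & ((1u << R_BITS) - 1);
    uint32_t tag  = addr >> (W_BITS + R_BITS);    /* remaining high bits */
    printf("tag=0x%X line=0x%X word=0x%X\n",
           (unsigned)tag, (unsigned)line, (unsigned)word);
    return 0;  /* prints tag=0xA line=0xBCD word=0xE */
}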


Victim Cache

◼ Originally proposed as an approach to reduce the conflict misses of direct mapped caches without affecting their fast access time
◼ A fully associative cache
◼ Typical size is 4 to 16 cache lines
◼ Resides between the direct mapped L1 cache and the next level of memory
Associative Cache Mapping

Fully Associative Cache Organization

Associative Mapping Example

Associative Mapping Summary

◼ Address length = (s + w) bits
◼ Number of addressable units = 2^(s+w) words or bytes
◼ Block size = line size = 2^w words or bytes
◼ Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
◼ Number of lines in cache = undetermined
◼ Size of tag = s bits


Set Associative Mapping

◼ Compromise that exhibits the strengths of both the direct and associative approaches while reducing their disadvantages

◼ Cache consists of a number of sets
◼ Each set contains a number of lines
◼ A given block maps to any line in a given set

◼ e.g. 2 lines per set
◼ 2-way associative mapping
◼ A given block can be in one of 2 lines in only one set

Mapping From Main Memory to Cache: k-Way Set Associative

k-Way Set Associative Cache Organization

Set Associative Mapping Summary

◼ Address length = (s + w) bits
◼ Number of addressable units = 2^(s+w) words or bytes
◼ Block size = line size = 2^w words or bytes
◼ Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
◼ Number of lines in set = k
◼ Number of sets = v = 2^d
◼ Number of lines in cache = m = kv = k × 2^d
◼ Size of cache = k × 2^(d+w) words or bytes
◼ Size of tag = (s – d) bits
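A hedged C sketch that computes these parameters for a k-way set-associative cache (helper and names are mine; direct mapping is the special case k = 1, and fully associative is a single set with k = m):

#include <stdio.h>

/* Integer log2 for power-of-two sizes. */
static unsigned ilog2(unsigned long x) {
    unsigned n = 0;
    while (x > 1) { x >>= 1; n++; }
    return n;
}

int main(void) {
    /* Illustrative parameters (assumed, matching the worked problem
       below): 1 MB main memory, 16-byte blocks, 64 KB cache, k = 2. */
    unsigned long mem_bytes = 1ul << 20, block = 16, cache_bytes = 64ul << 10;
    unsigned k = 2;

    unsigned w = ilog2(block);                       /* word field bits */
    unsigned long lines = cache_bytes / block;       /* m = k * v       */
    unsigned d = ilog2(lines / k);                   /* set field bits  */
    unsigned s = ilog2(mem_bytes) - w;               /* block id bits   */
    printf("word=%u set=%u tag=%u\n", w, d, s - d);  /* 4, 11, 5        */
    return 0;
}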


Varying Associativity Over Cache Size

Problem

◼ Assume the size of the main memory in a computer is 1 MByte. The block size of the main memory is 16 bytes. The size of each word is 1 byte. The size of the cache memory is 64 KBytes.

◼ Draw the main memory address format for direct mapping, associative mapping, and two-way set associative mapping.

◼ Direct mapping: 4-bit tag, 12-bit line, 4-bit word
◼ Associative mapping: 16-bit tag, 4-bit word
◼ Two-way set associative mapping: 5-bit tag, 11-bit set, 4-bit word
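One way to verify these field widths (my working, using the summary formulas above): a 1-MByte byte-addressable memory needs 20 address bits, and a 16-byte block gives a 4-bit word field. Direct: the cache holds 64 KB / 16 B = 4096 = 2^12 lines, so the line field is 12 bits and the tag is 20 − 12 − 4 = 4 bits. Associative: everything above the word field is tag, 20 − 4 = 16 bits. Two-way set associative: 4096 lines / 2 = 2048 = 2^11 sets, so the set field is 11 bits and the tag is 20 − 11 − 4 = 5 bits.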
Problem

◼ A set-associative cache consists of 64 lines, or slots, divided into four-line sets. Main memory contains 4K blocks of 128 words each. Show the format of main memory addresses.

◼ A two-way set-associative cache has lines of 16 bytes and a total size of 8 KBytes. The 64-MByte main memory is byte addressable. Show the format of main memory addresses.
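A sketch of how these can be worked (my solution; the slide does not give answers): First cache: 64 lines in four-line sets gives 64 / 4 = 16 = 2^4 sets. Main memory holds 2^12 blocks of 2^7 words, so a word address is 19 bits: word = 7 bits, set = 4 bits, tag = 19 − 4 − 7 = 8 bits. Second cache: 8 KB / 16 B = 512 lines, so two-way sets give 256 = 2^8 sets. A 64-MByte byte-addressable memory needs 26 address bits: word = 4 bits, set = 8 bits, tag = 26 − 8 − 4 = 14 bits.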
Replacement Algorithms

◼ Once the cache has been filled, when a new block is brought into the cache, one of the existing blocks must be replaced

◼ For direct mapping there is only one possible line for any particular block and no choice is possible

◼ For the associative and set-associative techniques a replacement algorithm is needed

◼ To achieve high speed, the algorithm must be implemented in hardware

The four most common replacement algorithms are:

◼ Least recently used (LRU)
◼ Most effective
◼ Replace that block in the set that has been in the cache longest with no reference to it
◼ Because of its simplicity of implementation, LRU is the most popular replacement algorithm

◼ First-in-first-out (FIFO)
◼ Replace that block in the set that has been in the cache longest
◼ Easily implemented as a round-robin or circular buffer technique

◼ Least frequently used (LFU)
◼ Replace that block in the set that has experienced the fewest references
◼ Could be implemented by associating a counter with each line

◼ Random
◼ Pick a line at random from among the candidate lines
◼ Simulation studies show random replacement performs only slightly worse than usage-based algorithms
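A hedged C sketch of the counter idea applied to LRU for a single set (structure and names are my own; a real cache does this in hardware):

#include <stdint.h>

#define K 4  /* lines per set (4-way set associative) */

struct line { uint32_t tag; int valid; unsigned last_used; };

/* Choose the victim line in a set: prefer an invalid (free) line,
   otherwise the line with the oldest last_used stamp, i.e. the one
   in the cache longest with no reference to it. */
int lru_victim(struct line set[K]) {
    int victim = 0;
    for (int i = 0; i < K; i++) {
        if (!set[i].valid) return i;
        if (set[i].last_used < set[victim].last_used)
            victim = i;
    }
    return victim;
}

/* On every hit or fill, stamp the line from a running counter so
   lru_victim can order the lines by recency of use. */
void touch(struct line *l, unsigned *counter) {
    l->last_used = ++*counter;
}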
Write Policy

◼ When a block that is resident in the cache is to be replaced, there are two cases to consider:
• If the old block in the cache has not been altered, then it may be overwritten with a new block without first writing out the old block
• If at least one write operation has been performed on a word in that line of the cache, then main memory must be updated by writing the line of cache out to the block of memory before bringing in the new block

◼ There are two problems to contend with:
• More than one device may have access to main memory
• A more complex problem occurs when multiple processors are attached to the same bus and each processor has its own local cache – if a word is altered in one cache it could conceivably invalidate a word in other caches

Write Through and Write Back

◼ Write through
◼ Simplest technique
◼ All write operations are made to main memory as well as to the cache
◼ The main disadvantage of this technique is that it generates substantial memory traffic and may create a bottleneck

◼ Write back
◼ Minimizes memory writes
◼ Updates are made only in the cache
◼ Portions of main memory are invalid, and hence accesses by I/O modules can be allowed only through the cache
◼ This makes for complex circuitry and a potential bottleneck
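A hedged C sketch contrasting the two policies on a write hit and at eviction (types and names are mine, and memory_write_line is a stand-in for the bus transfer; real caches implement this in hardware):

#include <stdint.h>
#include <string.h>

struct cache_line {
    uint32_t tag;
    int      valid;
    int      dirty;    /* meaningful only for write-back */
    uint8_t  data[16];
};

/* Stand-in for a main-memory write of one cache line. */
void memory_write_line(uint32_t addr, const uint8_t *data);

/* Write through: update the cache AND main memory on every write,
   so memory is always current (at the cost of extra traffic). */
void write_through(struct cache_line *l, uint32_t addr, const uint8_t *src) {
    memcpy(l->data, src, sizeof l->data);
    memory_write_line(addr, l->data);
}

/* Write back: update only the cache and mark the line dirty; the
   block is written to memory later, when the line is evicted. */
void write_back(struct cache_line *l, const uint8_t *src) {
    memcpy(l->data, src, sizeof l->data);
    l->dirty = 1;
}

void evict(struct cache_line *l, uint32_t addr) {
    if (l->dirty) {    /* altered line: flush before reuse */
        memory_write_line(addr, l->data);
        l->dirty = 0;
    }
    l->valid = 0;
}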

Cache Coherency

◼ Even if a write-through policy is used, the other caches may contain invalid data. A system that prevents this problem is said to maintain cache coherency. Possible approaches to cache coherency include the following:

1. Hardware transparency: Additional hardware is used to ensure that all updates to main memory via cache are reflected in all caches.

2. Non-cacheable memory: A portion of main memory is shared by more than one processor, and this portion is designated as non-cacheable.

3. Bus watching with write through: Each cache controller monitors the address lines to detect write operations to memory by other bus masters.
Line Size

◼ When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved

◼ As the block size increases, more useful data are brought into the cache

◼ As the block size increases, the hit ratio will at first increase because of the principle of locality

◼ The hit ratio will begin to decrease as the block becomes bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced

◼ Two specific effects come into play:
• Larger blocks reduce the number of blocks that fit into a cache
• As a block becomes larger, each additional word is farther from the requested word

Multilevel Caches

◼ As logic density has increased, it has become possible to have a cache on the same chip as the processor

◼ The on-chip cache reduces the processor’s external bus activity, speeds up execution time, and increases overall system performance
◼ When the requested instruction or data is found in the on-chip cache, the bus access is eliminated
◼ On-chip cache accesses will complete appreciably faster than would even zero-wait state bus cycles
◼ During this period the bus is free to support other transfers

◼ Two-level cache:
◼ Internal cache designated as level 1 (L1)
◼ External cache designated as level 2 (L2)

◼ Potential savings due to the use of an L2 cache depend on the hit rates in both the L1 and L2 caches

◼ The use of multilevel caches complicates all of the design issues related to caches, including size, replacement algorithm, and write policy
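The earlier two-level access-time formula extends naturally to an L1/L2 pair (my derivation, following the same accounting; not from the slides). With T1, T2, T3 the L1, L2, and main-memory access times and H1, H2 the hit ratios of L1 and L2:

Ts = H1 × T1 + (1 − H1) × H2 × (T1 + T2) + (1 − H1) × (1 − H2) × (T1 + T2 + T3)

This makes the dependence on both hit rates explicit: L2 is consulted only on the (1 − H1) fraction of references that miss in L1, so its benefit shrinks as H1 grows.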
Hit Ratio (L1 & L2)

For 8-KByte and 16-KByte L1

Unified Versus Split Caches

◼ It has become common to split the cache:
◼ One dedicated to instructions
◼ One dedicated to data
◼ Both exist at the same level, typically as two L1 caches

◼ Advantages of unified cache:
◼ Higher hit rate
◼ Balances load of instruction and data fetches automatically
◼ Only one cache needs to be designed and implemented

◼ Trend is toward split caches at the L1 level and unified caches for higher levels

◼ Advantages of split cache:
◼ Eliminates cache contention between instruction fetch/decode unit and execution unit
◼ Important in pipelining

PENTIUM 4 CACHE ORGANIZATION

Intel Cache Evolution

Pentium 4 Block Diagram

Pentium 4 Cache Operating Modes

◼ CD (cache disable)
◼ NW (not write-through)

Table 4.5 Pentium 4 Cache Operating Modes

ARM Cache Features

ARM Cache and Write Buffer Organization


Summary

Cache Memory
Chapter 4

◼ Characteristics of Memory Systems
◼ Location
◼ Capacity
◼ Unit of transfer

◼ Memory Hierarchy
◼ How much?
◼ How fast?
◼ How expensive?

◼ Cache memory principles

◼ Elements of cache design
◼ Cache addresses
◼ Cache size
◼ Mapping function
◼ Replacement algorithms
◼ Write policy
◼ Line size
◼ Number of caches

◼ Pentium 4 cache organization
◼ ARM cache organization

Thank you for your patience :)
