Computer Architecture
Lecture 2: Fundamentals,
Memory Hierarchy, Caches
n Memory hierarchy
n Caches
2
Takeaway From Lecture 1
n In Computer Architecture
4
A Note on Hardware vs. Software
n This course might seem like it is only “Computer Hardware”
5
What Do I Expect From You?
n Required background: Digital circuits course, programming, an
open mind willing to take in many exciting concepts.
7
How Will You Be Evaluated?
8
What Will You Learn
n Computer Architecture: The science and art of
designing, selecting, and interconnecting hardware
components and designing the hardware/software interface
to create a computing system that meets functional,
performance, energy consumption, cost, and other specific
goals.
Problem
Algorithm
Program/Language
Runtime System
(VM, OS, MM)
ISA (Architecture)
Microarchitecture
Logic
Circuits
Electrons
10
Levels of Transformation, Revisited
n A user-centric view: computer designed for users
Problem
Algorithm
Program/Language User
Runtime System
(VM, OS, MM)
ISA
Microarchitecture
Logic
Circuits
Electrons
13
Course Website
n http://safari.ethz.ch/architecture
14
Homework 0
n Due Sep 27
q https://safari.ethz.ch/farm/architecture_fs17/doku.php?id=homeworks
15
Heads Up
n We will have a few required review assignments
q Due likely end of next week
16
Why Study Computer
Architecture?
17
What is Computer Architecture?
18
An Enabler: Moore’s Law
n Only 3 pages
n A quote:
“With unit cost falling as the number of components per
circuit rises, by 1975 economics may dictate squeezing as
many as 65 000 components on a single silicon chip.”
n Another quote:
“Will it be possible to remove the heat generated by tens of
thousands of components in a single silicon chip?”
21
What Do We Use These Transistors for?
n Your readings for this week should give you an idea…
n Required
q Patt, “Requirements, Bottlenecks, and Good Fortune: Agents for
Microprocessor Evolution,” Proceedings of the IEEE 2001.
22
Why Study Computer Architecture?
n Enable better systems: make computers faster, cheaper,
smaller, more reliable, …
q By exploiting advances and changes in underlying technology/circuits
ISA
Microarchitecture
Logic
Circuits
Electrons
(Many new issues at the bottom: Look Down)
26
Computer Architecture Today (IV)
n You can revolutionize the way computers are built, if you
understand both the hardware and the software (and
change each accordingly)
27
… but, first …
n Let’s understand the fundamentals…
29
Fundamental Concepts
30
What is A Computer?
n Three key components
n Computation
n Communication
n Storage (memory)
31
What is A Computer?
n We will cover all three components
[Figure: Processing unit (control/sequencing + datapath), Memory (program and data), and I/O]
32
The Von Neumann Model/Architecture
n Also called stored program computer (instructions in
memory). Two key properties:
n Stored program
q Instructions stored in a linear memory array
q Memory is unified between instructions and data
n The interpretation of a stored value depends on the control
signals (When is a value interpreted as an instruction?)
n Sequential instruction processing
34
The Von Neumann Model (of a Computer)
[Figure: MEMORY (Mem Addr Reg), PROCESSING UNIT (ALU, TEMP), CONTROL UNIT (IP, Inst Register), INPUT, OUTPUT]
35
The Von Neumann Model (of a Computer)
n Q: Is this the only way that a computer can operate?
n A: No.
n Qualified Answer: No, but it has been the dominant way
q i.e., the dominant paradigm for computing
q for N decades
36
The Dataflow Model (of a Computer)
n Von Neumann model: An instruction is fetched and
executed in control flow order
q As specified by the instruction pointer
q Sequential unless explicit control flow instruction
n Dataflow model: An instruction is fetched and executed in data
flow order, i.e., when its operands are ready (there is no instruction pointer)
n Which model is more natural to you as a programmer?
38
More on Data Flow
n In a data flow machine, a program consists of data flow
nodes
q A data flow node fires (is fetched and executed) when all its
inputs are ready
n i.e., when all inputs have tokens
39
Data Flow Nodes
40
An Example Data Flow Program
[Figure: example data flow graph; the final result flows to an OUT node]
41
ISA-level Tradeoff: Instruction Pointer
n Do we need an instruction pointer in the ISA?
q Yes: Control-driven, sequential execution
n An instruction is executed when the IP points to it
n IP automatically changes sequentially (except for control flow
instructions)
q No: Data-driven, parallel execution
n An instruction is executed when all its operand values are
available (data flow)
42
ISA vs. Microarchitecture Level Tradeoff
n A similar tradeoff (control vs. data-driven execution) can be
made at the microarchitecture level
44
The Von Neumann Model
n All major instruction set architectures today use this model
q x86, ARM, MIPS, SPARC, Alpha, POWER
49
Microarchitecture
n Implementation of the ISA under specific design constraints
and goals
n Anything done in hardware without exposure to software
q Pipelining
q In-order versus out-of-order instruction execution
q Memory access scheduling policy
q Speculative execution
q Superscalar processing (multiple instruction issue?)
q Clock gating
q Caching? Levels, size, associativity, replacement policy
q Prefetching?
q Voltage/frequency scaling?
q Error correction?
50
Property of ISA vs. Uarch?
n ADD instruction’s opcode
n Number of general purpose registers
n Number of ports to the register file
n Number of cycles to execute the MUL instruction
n Whether or not the machine employs pipelined instruction
execution
n Remember
q Microarchitecture: Implementation of the ISA under specific
design constraints and goals
51
Design Point
n A set of design considerations and their importance
q leads to tradeoffs in both ISA and uarch
n Considerations
q Cost
q Performance
53
Tradeoffs: Soul of Computer Architecture
n ISA-level tradeoffs
n Microarchitecture-level tradeoffs
54
Why Is It (Somewhat) Art?
[Figure: the levels of transformation (Problem, Algorithm, Program/Language, Runtime System (VM, OS, MM), ISA, Microarchitecture, Logic, Circuits, Electrons), with new demands and personalities of users arriving from the top (Look Up) and new/changing issues and capabilities arriving at the bottom (Look Down and Forward)]
57
We Covered a Lot of This in
Digital Circuits & Computer Architecture
One Slide Overview of Digital Circuits SS17
n Logic Design, Verilog, FPGAs
n ISA (MIPS)
n Single-cycle Microarchitectures
n Pipelining
n Out-of-Order Execution
60
Digital Circuits Materials for Review (I)
n All Digital Circuits Lecture Videos Are Online:
q https://www.youtube.com/playlist?list=PL5Q2soXY2Zi-IXWTT7xoNYpst5-zdZQ6y
61
Digital Circuits Materials for Review (II)
n Particularly useful and relevant lectures for this course
63
Tentative Agenda (Upcoming Lectures)
n The memory hierarchy
n Caches, caches, more caches (high locality, high bandwidth)
n Virtualizing the memory hierarchy
n Main memory: DRAM
n Main memory control, scheduling, interference, management
n Memory latency tolerance and prefetching techniques
n Non-volatile memory & emerging technologies
n Multiprocessors
n Coherence and consistency
n Interconnection networks
n Multi-core issues
n Multithreading
64
Optional Readings for Today & Next Week
n Memory Hierarchy and Caches
65
Memory (Programmer’s View)
66
Abstraction: Virtual vs. Physical Memory
n Programmer sees virtual memory
q Can assume the memory is “infinite”
n Reality: Physical memory size is much smaller than what
the programmer assumes
n The system (system software + hardware, cooperatively)
maps virtual memory addresses to physical memory
q The system automatically manages the physical memory
space transparently to the programmer
68
Idealism
[Figure: Instruction Supply → Pipeline (instruction execution) → Data Supply]
Memory in a Modern System
[Figure: a modern multi-core chip: CORE 0-3, private L2 CACHE 0-3, SHARED L3 CACHE, DRAM MEMORY CONTROLLER, DRAM INTERFACE, DRAM BANKS]
Ideal Memory
n Zero access time (latency)
n Infinite capacity
n Zero cost
n Infinite bandwidth (to support multiple accesses in parallel)
72
The Problem
n Ideal memory’s requirements oppose each other
n Bigger is slower
q Bigger → Takes longer to determine the location
73
Memory Technology: DRAM
n Dynamic random access memory
n Capacitor charge state indicates stored value
q Whether the capacitor is charged or discharged indicates
storage of 1 or 0
q 1 capacitor
q 1 access transistor
n Capacitor leaks through the RC path
q DRAM cell loses charge over time
q DRAM cell needs to be refreshed
[Figure: 1T-1C DRAM cell with row enable and _bitline]
74
Memory Technology: SRAM
n Static random access memory
n Two cross coupled inverters store a single bit
q Feedback path enables the stored value to persist in the “cell”
q 4 transistors for storage
q 2 transistors for access
[Figure: 6T SRAM cell with row select, bitline, and _bitline]
75
Memory Bank Organization and Operation
n Read access sequence:
1. Decode row address & drive word-lines
2. Selected bits drive bit-lines
• Entire row read
3. Amplify row data
4. Decode column address & select subset of row
• Send to output
5. Precharge bit-lines
• For next access
76
SRAM (Static Random Access Memory)
Read Sequence
1. address decode
2. drive row select
3. selected bit-cells drive bitlines
4. differential sensing and column select (data is ready)
5. precharge all bitlines (for next read or write)
[Figure: SRAM cell with row select, bitline, and _bitline]
n SRAM
q Faster access (no capacitor)
q Lower density (6T cell)
q Higher cost
q No need for refresh
q Manufacturing compatible with logic process (no capacitor)
79
The Problem
n Bigger is slower
q SRAM, 512 Bytes, sub-nanosec
q SRAM, KByte~MByte, ~nanosec
q DRAM, Gigabyte, ~50 nanosec
q Hard Disk, Terabyte, ~10 millisec
81
The Memory Hierarchy
[Figure: CPU and register file (RF) → Cache → Main Memory (DRAM) → Hard Disk]
83
Locality
n One’s recent past is a very good predictor of his/her near
future.
84
Memory Locality
n A “typical” program has a lot of locality in memory
references
q typical programs are composed of “loops”
85
Caching Basics: Exploit Temporal Locality
n Idea: Store recently accessed data in automatically
managed fast memory (called cache)
n Anticipation: the data will be accessed again soon
86
Caching Basics: Exploit Spatial Locality
n Idea: Store addresses adjacent to the recently accessed
one in automatically managed fast memory
q Logically divide memory into equal size blocks
q Fetch to cache the accessed block in its entirety
n Anticipation: nearby data will be accessed soon
87
The Bookshelf Analogy
n Book in your hand
n Desk
n Bookshelf
n Boxes at home
n Boxes in storage
88
Caching in a Pipelined Design
n The cache needs to be tightly integrated into the pipeline
q Ideally, access in 1-cycle so that dependent operations do not
stall
n High frequency pipeline → Cannot make the cache large
q But, we want a large cache AND a pipelined design
n Idea: Cache hierarchy
[Figure: CPU and RF → Level 1 Cache → Level 2 Cache → Main Memory (DRAM)]
89
A Note on Manual vs. Automatic Management
n Manual: Programmer manages data movement across levels
-- too painful for programmers on substantial programs
q “core” vs “drum” memory in the 50’s
n Automatic: Hardware manages data movement across levels,
transparently to the programmer
q You don’t need to know how big the cache is and how it works to
write a “correct” program! (What if you want a “fast” program?)
90
Automatic Management in Memory Hierarchy
n Wilkes, “Slave Memories and Dynamic Storage Allocation,”
IEEE Trans. On Electronic Computers, 1965.
92
A Modern Memory Hierarchy
[Figure: the memory hierarchy behind the memory abstraction]
Register File: 32 words, sub-nsec (manual/compiler register spilling)
L1 cache: ~32 KB, ~nsec
L2 cache: 512 KB ~ 1 MB, many nsec
L3 cache, .....
(caches: automatic HW cache management)
Hierarchical Latency Analysis
n For a given memory hierarchy level i, it has a technology-intrinsic
access time of ti; the perceived access time Ti is longer than ti
n Except for the outer-most level, when looking for a given address
there is
q a chance (hit-rate hi) you “hit” and the access time is ti
q a chance (miss-rate mi) you “miss” and the access time is ti + Ti+1
q hi + mi = 1
n Thus
Ti = hi·ti + mi·(ti + Ti+1)
Ti = ti + mi·Ti+1
n Keep mi low
q increasing capacity Ci lowers mi, but beware of increasing ti
q lower mi by smarter cache management (replacement: anticipate what you
don’t need; prefetching: anticipate what you will need)
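As a concrete illustration of the recurrence Ti = ti + mi·Ti+1, the C sketch below evaluates the perceived access time of a four-level hierarchy; the access times and miss rates are purely illustrative assumptions, not numbers from the lecture.

#include <stdio.h>

/* Perceived access time per level: Ti = ti + mi * Ti+1.
   Illustrative numbers: L1 ~1 ns, L2 ~5 ns, L3 ~20 ns, DRAM ~50 ns. */
static double perceived_latency(const double *t, const double *m,
                                int level, int levels) {
    if (level == levels - 1)
        return t[level];                  /* outermost level always "hits" */
    return t[level] + m[level] * perceived_latency(t, m, level + 1, levels);
}

int main(void) {
    double t[] = {1.0, 5.0, 20.0, 50.0};  /* intrinsic access times ti (ns) */
    double m[] = {0.10, 0.30, 0.50, 0.0}; /* miss rates mi per level */
    printf("Perceived L1 access time T1: %.2f ns\n",
           perceived_latency(t, m, 0, 4));
    return 0;
}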
98
Caching Basics
n Block (line): Unit of storage in the cache
q Memory is logically divided into cache blocks that map to
locations in the cache
n On a reference:
q HIT: If in cache, use cached data instead of accessing memory
q MISS: If not in cache, bring block into cache
n Maybe have to kick something else out to do it
[Figure: address → Tag Store (Hit/miss?) and Data Store (Data)]
101
Blocks and Addressing the Cache
n Memory is logically divided into fixed-size blocks
[Figure: an 8-bit address split into tag, index, and byte-in-block fields]
n Cache access:
1) index into the tag and data stores with index bits in address
2) check valid bit in tag store
3) compare tag bits in address with the stored tag in tag store
[Figure: tag store entry (V, tag) compared against the address tag (=?); a MUX selects the byte in block; outputs are Hit? and Data]
q Addresses with same index contend for the same location
n Cause conflict misses
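The tag/index/byte-in-block split can be made concrete with a small C sketch; the geometry below (8-byte blocks, 8 sets) is an assumption chosen for the example, not the one in the figure above.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical geometry: 8-byte blocks, 8 sets ->
   3 byte-in-block bits, 3 index bits, remaining bits are the tag. */
#define OFFSET_BITS 3
#define INDEX_BITS  3

static void decompose(uint32_t addr) {
    uint32_t offset = addr & ((1u << OFFSET_BITS) - 1);
    uint32_t index  = (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag    = addr >> (OFFSET_BITS + INDEX_BITS);
    printf("addr=%3u  tag=%u index=%u offset=%u\n",
           (unsigned)addr, (unsigned)tag, (unsigned)index, (unsigned)offset);
}

int main(void) {
    decompose(0);    /* tag 0, index 0 */
    decompose(64);   /* tag 1, index 0: contends with address 0 -> conflict miss */
    return 0;
}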
103
Direct-Mapped Caches
n Direct-mapped cache: Two blocks in memory that map to
the same index in the cache cannot be present in the cache
at the same time
q One index → one entry
104
Set Associativity
n Addresses 0 and 8 always conflict in direct mapped cache
n Instead of having one column of 8, have 2 columns of 4 blocks
[Figure: 2-way set-associative cache: tag store with two (V, tag) entries per set, two comparators (=?), hit logic, and a data store with MUXes selecting the way and the byte in block]
[Figure: higher associativity: more comparators (=?) per set, wider hit logic and way-select MUXes]
107
Associativity (and Tradeoffs)
n Degree of associativity: How many blocks can map to the
same index (or set)?
n Higher associativity
++ Higher hit rate
-- Slower cache access time (hit latency and data access latency)
-- More expensive hardware (more comparators)
[Plot: hit rate vs. associativity]
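A minimal sketch of the tag check in a set-associative lookup, assuming a hypothetical 4-way tag-store layout; hardware compares all ways of the selected set in parallel, which the loop below only models sequentially.

#include <stdbool.h>
#include <stdint.h>

#define NUM_WAYS 4

struct tag_entry {
    bool     valid;
    uint32_t tag;
};

/* Returns the hitting way of the selected set, or -1 on a miss. */
static int lookup(const struct tag_entry set[NUM_WAYS], uint32_t tag) {
    for (int way = 0; way < NUM_WAYS; way++)
        if (set[way].valid && set[way].tag == tag)
            return way;
    return -1;
}

Each extra way adds one comparator and widens the hit logic and way-select MUX, which is where the extra hit latency and hardware cost come from.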
108
Computer Architecture
Lecture 2: Fundamentals,
Memory Hierarchy, Caches
112
Implementing LRU
n Idea: Evict the least recently accessed block
n Problem: Need to keep track of access ordering of blocks
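One straightforward (but costly) way to track the exact access order is an age counter per way, as in the C sketch below; the 4-way geometry and counter encoding are illustrative assumptions.

#include <stdint.h>

#define NUM_WAYS 4

/* age[w] == 0 means most recently used; larger age means older.
   Initialized to a valid ordering so the values stay a permutation. */
static uint8_t age[NUM_WAYS] = {0, 1, 2, 3};

static void touch(int way) {              /* called on every access to the set */
    uint8_t old = age[way];
    for (int w = 0; w < NUM_WAYS; w++)
        if (age[w] < old)
            age[w]++;                     /* ways younger than the touched one age */
    age[way] = 0;                         /* touched way becomes MRU */
}

static int victim(void) {                 /* way to evict: the oldest one */
    int v = 0;
    for (int w = 1; w < NUM_WAYS; w++)
        if (age[w] > age[v])
            v = w;
    return v;
}

For an N-way set this needs N counters plus updates on every access, which is why highly-associative caches use the approximations on the next slides.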
113
Approximations of LRU
n Most modern processors do not implement “true LRU” (also
called “perfect LRU”) in highly-associative caches
n Why?
q True LRU is complex
q LRU is an approximation to predict locality anyway (i.e., not
the best possible cache management policy)
n Examples:
q Not MRU (not most recently used)
q Hierarchical LRU: divide the N-way set into M “groups”, track
the MRU group and the MRU way in each group
q Victim-NextVictim Replacement: Only keep track of the victim
and the next victim
114
Hierarchical LRU (not MRU)
n Divide a set into multiple groups
n Keep track of only the MRU group
n Keep track of only the MRU block in each group
115
Hierarchical LRU (not MRU): Questions
n 16-way cache
n 2 8-way groups
116
Victim/Next-Victim Policy
n Only 2 blocks’ status tracked in each set:
q victim (V), next victim (NV)
q all other blocks denoted as (O) – Ordinary block
n On a cache miss
q Replace V
q Demote NV to V
q Randomly pick an O block as NV
n On a cache hit to V
q Demote NV to V
q Randomly pick an O block as NV
q Turn V to O
117
Victim/Next-Victim Policy (II)
n On a cache hit to NV
q Randomly pick an O block as NV
q Turn NV to O
n On a cache hit to O
q Do nothing
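The transitions listed above can be captured in a few lines of C. The sketch below assumes an 8-way set, a uniformly random choice among ordinary blocks, and that an incoming block enters the set as ordinary (the last point is an assumption; the slides do not specify it).

#include <stdlib.h>

enum vnv { O, V, NV };                     /* ordinary, victim, next victim */
#define NUM_WAYS 8

static enum vnv state[NUM_WAYS] = { V, NV };   /* remaining ways start as O */

static int find(enum vnv s) {
    for (int w = 0; w < NUM_WAYS; w++)
        if (state[w] == s) return w;
    return -1;
}

static int pick_ordinary(void) {           /* random ordinary block */
    int w;
    do { w = rand() % NUM_WAYS; } while (state[w] != O);
    return w;
}

static int on_miss(void) {                 /* returns the way to replace */
    int v = find(V);
    state[find(NV)] = V;                   /* demote NV to V */
    state[v] = O;                          /* assumption: incoming block enters as O */
    state[pick_ordinary()] = NV;           /* random O block becomes the new NV */
    return v;
}

static void on_hit(int way) {
    if (state[way] == V) {                 /* hit to V: demote NV, pick new NV, V -> O */
        state[find(NV)] = V;
        state[way] = O;
        state[pick_ordinary()] = NV;
    } else if (state[way] == NV) {         /* hit to NV: NV -> O, pick new NV */
        state[way] = O;
        state[pick_ordinary()] = NV;
    }                                      /* hit to O: do nothing */
}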
118
Victim/Next-Victim Example
119
Cache Replacement Policy: LRU or Random
n LRU vs. Random: Which one is better?
q Example: 4-way cache, cyclic references to A, B, C, D, E
n 0% hit rate with LRU policy
n Set thrashing: When the “program working set” in a set is
larger than set associativity
q Random replacement policy is better when thrashing occurs
n In practice:
q Depends on workload
q Average hit rate of LRU and Random are similar
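The 0% LRU hit rate under cyclic references can be checked with a tiny single-set simulation; the C sketch below compares LRU and random replacement for a hypothetical 4-way set accessed cyclically with 5 distinct blocks (the random result varies with the seed).

#include <stdio.h>
#include <stdlib.h>

#define WAYS 4
#define REFS 1000

static int run(int use_random) {
    int blocks[WAYS], age[WAYS], hits = 0;
    for (int w = 0; w < WAYS; w++) { blocks[w] = -1; age[w] = 0; }
    for (int i = 0; i < REFS; i++) {
        int addr = i % 5;                  /* cyclic A, B, C, D, E, A, ... */
        int hit_way = -1;
        for (int w = 0; w < WAYS; w++) {
            age[w]++;
            if (blocks[w] == addr) hit_way = w;
        }
        if (hit_way >= 0) { hits++; age[hit_way] = 0; continue; }
        int victim = 0;
        if (use_random) victim = rand() % WAYS;
        else for (int w = 1; w < WAYS; w++) if (age[w] > age[victim]) victim = w;
        blocks[victim] = addr;             /* replace the chosen victim */
        age[victim] = 0;
    }
    return hits;
}

int main(void) {
    printf("LRU hits:    %d / %d\n", run(0), REFS);
    printf("Random hits: %d / %d\n", run(1), REFS);
    return 0;
}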
121
Aside: Cache versus Page Replacement
n Physical memory (DRAM) is a cache for disk
q Usually managed by system software via the virtual memory
subsystem
n Dirty bit?
q Write back vs. write through caches
123
Handling Writes (I)
n When do we write the modified data in a cache to the next level?
n Write through: At the time the write happens
n Write back: When the block is evicted
q Write-back
+ Can combine multiple writes to the same block before eviction
q Potentially saves bandwidth between cache levels + saves energy
-- Need a bit in the tag store indicating the block is “dirty/modified”
q Write-through
+ Simpler
+ All levels are up to date. Consistency: Simpler cache coherence because
no need to check lower-level caches
-- More bandwidth intensive; no combining of writes
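A minimal write-back sketch in C: a store only sets the dirty bit, and the block is written to the next level only if it is dirty at eviction time. The block layout and the write_next_level() stub are illustrative assumptions, not a real interface.

#include <stdbool.h>
#include <stdint.h>

#define BLOCK_SIZE 64

struct cache_block {
    bool     valid, dirty;
    uint32_t tag;
    uint8_t  data[BLOCK_SIZE];
};

/* Stand-in for the next cache level / memory. */
static void write_next_level(uint32_t tag, const uint8_t *data) {
    (void)tag; (void)data;
}

static void store_byte(struct cache_block *b, unsigned offset, uint8_t value) {
    b->data[offset] = value;
    b->dirty = true;              /* block now differs from the next level */
}

static void evict(struct cache_block *b) {
    if (b->valid && b->dirty)
        write_next_level(b->tag, b->data);   /* write back modified blocks only */
    b->valid = false;
    b->dirty = false;
}

A write-through cache would instead call write_next_level() inside store_byte() on every store and would not need the dirty bit.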
124
Handling Writes (II)
n Do we allocate a cache block on a write miss?
q Allocate on write miss: Yes
q No-allocate on write miss: No
n No-allocate
+ Conserves cache space if locality of writes is low (potentially
better cache hit rate)
125
Handling Writes (III)
n What if the processor writes to an entire block over a small
amount of time?
n Is there any need to bring the block into the cache from
memory in the first place?
126
Sectored Caches
n Idea: Divide a block into subblocks (or sectors)
q Have separate valid and dirty bits for each sector
q When is this useful? (Think writes…)
Instruction vs. Data Caches
n Unified:
+ Dynamic sharing of cache space: no overprovisioning that
might happen with static partitioning (i.e., split I and D
caches)
-- Instructions and data can thrash each other (i.e., no
guaranteed space for either)
-- I and D are accessed in different places in the pipeline. Where
do we place the unified cache for fast access?
n Block size
n Associativity
n Replacement policy
n Insertion/Placement policy
131
Cache Size
n Cache size: total data (not including tag) capacity
q bigger can exploit temporal locality better
q not ALWAYS better
n Too large a cache adversely affects hit and miss latency
q smaller is faster => bigger is slower
q access time may degrade critical path
n Too small a cache
q doesn’t exploit temporal locality well
q useful data replaced often
[Plot: hit rate vs. cache size, with the “working set” size marked]
134
Associativity
n How many blocks can map to the same index (or set)?
n Larger associativity
q lower miss rate (reduced conflicts)
q higher hit latency and area cost (plus diminishing returns)
[Plot: hit rate vs. associativity]
n Smaller associativity
q lower cost
q lower hit latency
n Especially important for L1 caches
135
Classification of Cache Misses
n Compulsory miss
q first reference to an address (block) always results in a miss
q subsequent references should hit unless the cache block is
displaced for the reasons below
n Capacity miss
q cache is too small to hold everything needed
q defined as the misses that would occur even in a fully-associative
cache (with optimal replacement) of the same capacity
n Conflict miss
q defined as any miss that is neither a compulsory nor a capacity
miss
136
How to Reduce Each Miss Type
n Compulsory
q Caching cannot help
q Prefetching can
n Conflict
q More associativity
q Other ways to get more associativity without making the
cache associative
n Victim cache
n Better, randomized indexing
n Software hints?
n Capacity
q Utilize cache space better: keep blocks that will be referenced
q Software management: divide working set such that each
“phase” fits in cache
137
How to Improve Cache Performance
n Three fundamental goals
q Reducing miss rate
q Reducing miss latency or miss cost
q Reducing hit latency or hit cost
138
Improving Basic Cache Performance
n Reducing miss rate
q More associativity
q Alternatives/enhancements to associativity
n Victim caches, hashing, pseudo-associativity, skewed associativity
q Better replacement/insertion policies
q Software approaches
n Reducing miss latency/cost
q Multi-level caches
q Critical word first
q Subblocking/sectoring
q Better replacement/insertion policies
q Non-blocking caches (multiple cache misses in parallel)
q Multiple accesses per cycle
q Software approaches
139
Cheap Ways of Reducing Conflict Misses
n Instead of building highly-associative caches:
n Victim Caches
n Hashed/randomized Index Functions
n Pseudo Associativity
n Skewed Associative Caches
n …
140
Victim Cache: Reducing Conflict Misses
n Idea: Keep recently evicted (victim) blocks in a small fully-associative
buffer next to the cache; check it on a miss before going to the next level
[Figure: Direct Mapped Cache ↔ Victim cache → Next Level Cache]
141
Hashing and Pseudo-Associativity
n Hashing: Use better “randomizing” index functions
+ can reduce conflict misses
n by distributing the accessed memory blocks more evenly to sets
n Example of conflicting accesses: strided access pattern where
stride value equals number of sets in cache
-- More complex to implement: can lengthen critical path
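One common randomizing index function XORs the conventional index bits with higher-order address bits. The C sketch below assumes a hypothetical 64-set cache; with it, a strided pattern whose stride equals the number of sets no longer maps every block to the same set.

#include <stdint.h>

#define INDEX_BITS 6
#define NUM_SETS   (1u << INDEX_BITS)

static uint32_t conventional_index(uint32_t block_addr) {
    return block_addr & (NUM_SETS - 1);
}

static uint32_t hashed_index(uint32_t block_addr) {
    uint32_t low  = block_addr & (NUM_SETS - 1);
    uint32_t high = (block_addr >> INDEX_BITS) & (NUM_SETS - 1);
    return low ^ high;            /* XOR-fold two index-sized address slices */
}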
143
Skewed Associative Caches (I)
n Basic 2-way associative cache structure
[Figure: conventional 2-way cache: the same index function selects the same set in Way 0 and Way 1; one tag comparator (=?) per way]
144
Skewed Associative Caches (II)
n Skewed associative caches
q Each bank has a different index function
[Figure: skewed 2-way cache: index function f0 (and a different function for the other way) redistributes blocks that have the same conventional index, and thus the same set in Way 0, to different sets in Way 1]
145
Skewed Associative Caches (III)
n Idea: Reduce conflict misses by using different index
functions for each cache way
146
Software Approaches for Higher Hit Rate
n Restructuring data access patterns
n Restructuring data layout
n Loop interchange
n Data structure separation/merging
n Blocking
n …
147
Restructuring Data Access Patterns (I)
n Idea: Restructure data layout or data access patterns
n Example: If column-major
q x[i+1,j] follows x[i,j] in memory
q x[i,j+1] is far away from x[i,j]
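Since C arrays are row-major (so the roles of i and j are flipped relative to the column-major example above), loop interchange looks like the sketch below; the array size and function names are illustrative.

#include <stddef.h>

#define N 1024

/* Strided traversal: consecutive accesses are N*8 bytes apart,
   so each fetched cache block contributes only one useful element. */
void sum_all_strided(double a[N][N], double *sum) {
    *sum = 0.0;
    for (size_t j = 0; j < N; j++)
        for (size_t i = 0; i < N; i++)
            *sum += a[i][j];
}

/* Interchanged loops: consecutive accesses are adjacent in memory,
   so every element of a fetched cache block is used (spatial locality). */
void sum_all_sequential(double a[N][N], double *sum) {
    *sum = 0.0;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            *sum += a[i][j];
}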
149
Restructuring Data Layout (I)
n Pointer based traversal (e.g., of a linked list)
n Assume a huge linked list (1M nodes) and unique keys
n Why does the code below have poor cache hit rate?
q “Other fields” occupy most of the cache line even though rarely
accessed!

struct Node {
  struct Node* next;
  int key;
  char name[256];
  char school[256];
};

while (node) {
  if (node->key == input_key) {
    // access other fields of node
  }
  node = node->next;
}
150
Restructuring Data Layout (II)
n Idea: separate frequently-used fields of a data structure and pack
them into a separate data structure
n Who should do this?
q Programmer
q Compiler
n Profiling vs. dynamic
q Hardware?
q Who can determine what is frequently used?

struct Node {
  struct Node* next;
  int key;
  struct NodeData* node_data;
};

struct NodeData {
  char name[256];
  char school[256];
};

while (node) {
  if (node->key == input_key) {
    // access node->node_data
  }
  node = node->next;
}
151
Improving Basic Cache Performance
n Reducing miss rate
q More associativity
q Alternatives/enhancements to associativity
n Victim caches, hashing, pseudo-associativity, skewed associativity
q Better replacement/insertion policies
q Software approaches
n Reducing miss latency/cost
q Multi-level caches
q Critical word first
q Subblocking/sectoring
q Better replacement/insertion policies
q Non-blocking caches (multiple cache misses in parallel)
q Multiple accesses per cycle
q Software approaches
152
Miss Latency/Cost
n What is miss latency or miss cost affected by?
q Where does the miss get serviced from?
n Local vs. remote memory
n What level of cache in the hierarchy?
n Row hit versus row miss
n Queueing delays in the memory controller and the interconnect
n …
q How much does the miss stall the processor?
n Is it overlapped with other latencies?
n Is the data immediately needed?
n …
153
Memory Level Parallelism (MLP)
n Memory Level Parallelism (MLP) means generating and servicing
multiple memory accesses in parallel
n An isolated miss stalls the processor for its full latency; parallel
misses overlap their latencies and are therefore less costly
155
An Example
[Figure: timeline of the reference stream P4 P3 P2 P1 P1 P2 P3 P4 S1 S2 S3, where the P blocks miss in parallel and the S blocks miss in isolation]
n Belady’s OPT replacement: Hit/Miss H H H M H H H H M M M
q Misses = 4, Stalls = 4 (each S miss stalls the processor on its own)
n MLP-Aware replacement: Hit/Miss H M M M H M M M H H H
q Misses = 6, Stalls = 2 (the P misses are serviced in parallel), saving cycles
MLP-Aware Cache Replacement
n How do we incorporate MLP into replacement decisions?
158
Other Recommended Cache Papers (I)
159
Other Recommended Cache Papers (II)
160
Enabling Multiple Outstanding Misses
Handling Multiple Outstanding Accesses
n Question: If the processor can generate multiple cache
accesses, can the later accesses be handled while a
previous miss is outstanding?
162
Handling Multiple Outstanding Accesses
n Idea: Keep track of the status/data of misses that are being
handled in Miss Status Handling Registers (MSHRs)
163
Miss Status Handling Register
n Also called “miss buffer”
n Keeps track of
q Outstanding cache misses
q Pending load/store accesses that refer to the missing cache
block
n Fields of a single MSHR entry
q Valid bit
q Cache block address (to match incoming accesses)
q Control/status bits (prefetch, issued to memory, which
subblocks have arrived, etc)
q Data for each subblock
q For each pending load/store
n Valid, type, data size, byte in block, destination register or store
buffer entry address
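The fields listed above might be organized as in the following C sketch of one MSHR entry; the counts (8 subblocks, 4 pending accesses) and field widths are illustrative assumptions.

#include <stdbool.h>
#include <stdint.h>

#define SUBBLOCKS      8
#define SUBBLOCK_BYTES 8
#define MAX_PENDING    4

struct pending_access {
    bool    valid;
    bool    is_store;             /* access type: load or store */
    uint8_t size;                 /* data size in bytes */
    uint8_t byte_in_block;
    uint8_t dest;                 /* destination register or store-buffer entry */
};

struct mshr_entry {
    bool     valid;
    uint32_t block_addr;          /* matched against incoming accesses */
    bool     is_prefetch;         /* control/status bits */
    bool     issued_to_memory;
    uint8_t  subblocks_arrived;   /* one bit per subblock */
    uint8_t  data[SUBBLOCKS][SUBBLOCK_BYTES];
    struct pending_access pending[MAX_PENDING];
};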
164
Miss Status Handling Register Entry
165
MSHR Operation
n On a cache miss:
q Search MSHRs for a pending access to the same block
n Found: Allocate a load/store entry in the same MSHR entry
n Not found: Allocate a new MSHR
n No free entry: stall
166
Non-Blocking Cache Implementation
n When to access the MSHRs?
q In parallel with the cache?
q After cache access is complete?
167
Enabling High Bandwidth Memories
Multiple Instructions per Cycle
n Can generate multiple cache/memory accesses per cycle
n How do we ensure the cache/memory can handle multiple
accesses in the same clock cycle?
n Solutions:
q true multi-porting
q banking (interleaving)
169
Handling Multiple Accesses per Cycle (I)
n True multiporting
q Each memory cell has multiple read or write ports
+ Truly concurrent accesses (no conflicts on read accesses)
-- Expensive in terms of latency, power, area
q What about read and write to the same location at the same
time?
n Peripheral logic needs to handle this
170
Peripheral Logic for True Multiporting
171
Peripheral Logic for True Multiporting
172
Handling Multiple Accesses per Cycle (II)
n Virtual multiporting
q Time-share a single port
q Each access needs to be (significantly) shorter than clock cycle
q Used in Alpha 21264
q Is this scalable?
173
Handling Multiple Accesses per Cycle (III)
n Multiple cache copies
q Stores update both caches
q Loads proceed in parallel
n Used in Alpha 21164
n Scalability?
q Store operations cause a bottleneck
q Area proportional to “ports”
[Figure: stores go to both Cache Copy 1 and Cache Copy 2; Port 1 loads read Copy 1, Port 2 loads read Copy 2]
174
Handling Multiple Accesses per Cycle (IV)
n Banking (Interleaving)
q Bits in the address determine which bank an address maps to
n Address space partitioned into separate banks
n Which bits to use for “bank address”?
+ No increase in data store area
-- Cannot satisfy multiple accesses to the same bank
-- Crossbar interconnect in input/output
n Bank conflicts
q Two accesses are to the same bank
q How can these be reduced?
n Hardware? Software?
[Figure: Bank 0 holds even addresses, Bank 1 holds odd addresses]
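A sketch of even/odd bank selection in C, using the low-order block-address bit as the bank index; the block-offset width and bank count are illustrative assumptions.

#include <stdint.h>

#define BLOCK_OFFSET_BITS 6
#define BANK_BITS         1       /* 2 banks: even/odd block addresses */

static uint32_t bank_of(uint32_t addr) {
    uint32_t block_addr = addr >> BLOCK_OFFSET_BITS;
    return block_addr & ((1u << BANK_BITS) - 1);
}

/* Two accesses in the same cycle can proceed in parallel iff
   bank_of(a) != bank_of(b); otherwise they conflict on one bank. */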
175
General Principle: Interleaving
n Interleaving (banking)
q Problem: a single monolithic memory array takes long to
access and does not enable multiple accesses in parallel
n One Pager: Glew, “MLP Yes! ILP No!,” ASPLOS Wild and
Crazy Ideas Session, 1998.
177
Multi-Core Issues in Caching
Caches in Multi-Core Systems
n Cache efficiency becomes even more important in a multi-
core/multi-threaded system
q Memory bandwidth is at premium
q Cache space is a limited resource
n Many decisions
q Shared vs. private caches
q How to maximize performance of the entire system?
q How to provide QoS to different threads in a shared cache?
q Should cache management algorithms be aware of threads?
q How should space be allocated to threads in a shared cache?
179
Private vs. Shared Caches
n Private cache: Cache belongs to one core (a shared block can be in
multiple caches)
n Shared cache: Cache is shared by multiple cores
[Figure: four cores each with a private L2 CACHE vs. four cores sharing a single L2 CACHE]
180
Resource Sharing Concept and Advantages
n Idea: Instead of dedicating a hardware resource to a
hardware context, allow multiple contexts to use it
q Example resources: functional units, pipeline, caches, buses,
memory
n Why?
183
Shared Caches Between Cores
n Advantages:
q High effective capacity
q Dynamic partitioning of available cache space
n No fragmentation due to static partitioning
q Easier to maintain coherence (a cache block is in a single location)
q Shared data and locks do not ping pong between caches
n Disadvantages
q Slower access
q Cores incur conflict misses due to other cores’ accesses
n Misses due to inter-core interference
n Some cores can destroy the hit rate of other cores
q Guaranteeing a minimum level of service (or fairness) to each core is harder
(how much space, how much bandwidth?)
184
Shared Caches: How to Share?
n Free-for-all sharing
q Placement/replacement policies are the same as a single core
system (usually LRU or pseudo-LRU)
q Not thread/application aware
q An incoming block evicts a block regardless of which threads
the blocks belong to
n Problems
q Inefficient utilization of cache: LRU is not the best policy
q A cache-unfriendly application can destroy the performance of
a cache friendly application
q Not all applications benefit equally from the same amount of
cache: free-for-all might prioritize those that do not benefit
q Reduced performance, reduced fairness
185
Example: Utility Based Shared Cache Partitioning
n Goal: Maximize system throughput
n Observation: Not all threads/applications benefit equally from
caching → simple LRU replacement not good for system
throughput
n Idea: Allocate more cache space to applications that obtain the
most benefit from more space
[Figure: multiple cores, each with a private L2 cache, sharing an L3 cache, an interconnect, and shared memory controllers]
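The allocation idea can be sketched as a greedy loop that hands each way to the application with the largest marginal hit gain, assuming hypothetical per-application utility monitors that report hits[app][ways]; this illustrates the idea only and is not the exact algorithm of any particular proposal.

#define NUM_APPS 2
#define NUM_WAYS 16

/* hits[a][w] = hits application a would get if it owned w ways.
   alloc[a] receives the number of ways assigned to application a. */
void partition_ways(const unsigned hits[NUM_APPS][NUM_WAYS + 1],
                    unsigned alloc[NUM_APPS]) {
    for (int a = 0; a < NUM_APPS; a++)
        alloc[a] = 0;
    for (int w = 0; w < NUM_WAYS; w++) {
        int best = 0;
        unsigned best_gain = 0;
        for (int a = 0; a < NUM_APPS; a++) {
            unsigned gain = hits[a][alloc[a] + 1] - hits[a][alloc[a]];
            if (gain > best_gain) { best_gain = gain; best = a; }
        }
        alloc[best]++;            /* marginal utility decides who gets this way */
    }
}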
187
Need for QoS and Shared Resource Mgmt.
n Why is unpredictable performance (or lack of QoS) bad?
189
Shared Hardware Resources
n Memory subsystem (in both multithreaded and multi-core
systems)
q Non-private caches
q Interconnects
q Memory controllers, buses, banks