CS 105 Tour of the Black Holes of Computing: Cache Memories

Topics
Generic cache-memory organization
Direct-mapped caches
Set-associative caches
Impact of caches on performance
New Topic: Cache

A buffer between processor and memory
Often several levels of caches
Small but fast
Old values are removed from the cache to make space for new values
Capitalizes on spatial locality and temporal locality
Spatial locality: if a value is used, nearby values are likely to be used
Temporal locality: if a value is used, it is likely to be used again soon
Parameters vary by system and are unknown to the programmer, yet we can still write cache-friendly code
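Both kinds of locality show up in even the simplest loops; a minimal C sketch (ours, not from the slides):

/* A minimal sketch illustrating locality (not from the slides). */
int sum_array(int a[], int n)
{
    int i, sum = 0;      /* sum is touched every iteration: temporal locality */

    for (i = 0; i < n; i++)
        sum += a[i];     /* a[i], a[i+1], ... are adjacent: spatial locality */
    return sum;
}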
Cache Memories
Cache memories are small, fast SRAM-based memories managed automatically in hardware.
Hold frequently accessed blocks of main memory
CPU looks first for data in L1, then in L2, then in main memory. Typical bus structure:
[Figure: typical bus structure. The register file, ALU, and L1 cache sit on the CPU chip; a dedicated cache bus connects the bus interface to the off-chip L2 cache, and the system bus and memory bus connect the CPU to main memory through the I/O bridge.]
Inserting an L1 Cache Between the CPU and Main Memory
The tiny, very fast CPU register file has room for four 4-byte words
The transfer unit between the CPU register file and the cache is a 4-byte word
The small, fast L1 cache has room for two 4-word blocks; it is an associative memory
The transfer unit between the cache and main memory is a 4-word block (16 bytes)
The big, slow main memory has room for many 4-word blocks

[Figure: the two L1 cache lines (line 0, line 1) hold copies of 4-word blocks from main memory, e.g., block 10 ("a b c d"), block 21 ("p q r s"), and block 30 ("w x y z").]
General Org of a Cache Memory
Cache is an array of sets
Each set contains E lines
Each line holds a block of B = 2^b bytes of data, plus 1 valid bit and t tag bits
There are S = 2^s sets
Cache size: C = B x E x S data bytes

[Figure: the cache as an array of S sets; each line in a set has a valid bit, a tag, and block bytes 0 ... B-1. The set number acts like a hash code, and the tag acts like a hash key.]
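This organization maps naturally onto C types; a minimal sketch, with the fixed sizes chosen only for illustration:

#define B 16   /* bytes per block (2^b); size assumed for illustration */
#define E 2    /* lines per set */
#define S 4    /* sets (2^s) */

typedef struct {
    int valid;               /* does this line hold valid data? */
    unsigned tag;            /* t tag bits identifying the cached block */
    unsigned char data[B];   /* the block itself */
} cache_line_t;

typedef struct {
    cache_line_t lines[E];   /* each set contains E lines */
} cache_set_t;

cache_set_t cache[S];        /* the cache is an array of S sets */
                             /* total data: C = B * E * S bytes */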
Addressing Caches
An m-bit address A (bits m-1 down to 0) is divided into three fields:
  <tag> (t bits) | <set index> (s bits) | <block offset> (b bits)
The word at address A is in the cache if the tag bits in one of the <valid> lines in set <set index> match <tag>
The word contents begin at offset <block offset> bytes from the beginning of the block
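Extracting the three fields is a pair of shifts and masks; a sketch with illustrative helper names (ours, assuming an unsigned address):

unsigned block_offset(unsigned addr, int b)
{
    return addr & ((1u << b) - 1);           /* low b bits */
}

unsigned set_index(unsigned addr, int s, int b)
{
    return (addr >> b) & ((1u << s) - 1);    /* next s bits */
}

unsigned tag_bits(unsigned addr, int s, int b)
{
    return addr >> (s + b);                  /* remaining t = m - s - b bits */
}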
Direct-Mapped Cache
Simplest kind of cache
Characterized by exactly one line per set (E = 1)

[Figure: sets 0 ... S-1, each holding a single line with a valid bit, a tag, and a cache block.]
Accessing Direct-Mapped Caches
Set selection
Use the set index bits to determine the set of interest
[Figure: the s set-index bits of the address (e.g., 00001) select one set; the t tag bits are compared afterward, and the b block-offset bits play no role in set selection.]
Accessing Direct-Mapped Caches
Line matching and word selection
Line matching: find a valid line in the selected set with a matching tag
Word selection: then extract the word

(1) The valid bit must be set
(2) The tag bits in the cache line must match the tag bits in the address
(3) If (1) and (2) hold, it is a cache hit, and the block offset selects the starting byte

[Figure: selected set i holds a valid line with tag 0110 and an 8-byte block (bytes 0 ... 7) whose bytes 4 ... 7 contain the word w0 w1 w2 w3; the address has tag 0110, set index i, and block offset 100 (byte 4), so the access hits and the word starts at byte 4.]
Direct-Mapped Cache Simulation
M = 16 addressable bytes (4-bit addresses), B = 2 bytes/block, S = 4 sets, E = 1 entry/set
Address bits: t = 1 tag bit, s = 2 set-index bits, b = 1 offset bit (x | xx | x)
Address trace (reads, in binary): 0 [0000], 1 [0001], 13 [1101], 8 [1000], 0 [0000]

(1) 0 [0000] (miss): set 00 loads M[0-1], v = 1, tag = 0
(2) 1 [0001] (hit): set 00 already holds M[0-1] with tag 0
(3) 13 [1101] (miss): set 10 loads M[12-13], v = 1, tag = 1
(4) 8 [1000] (miss): conflict in set 00; M[8-9] with tag 1 replaces M[0-1]
(5) 0 [0000] (miss): conflict in set 00 again; M[0-1] with tag 0 replaces M[8-9]
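The whole simulation fits in a few lines of C; a sketch using the slide's parameters (the code and names are ours, not part of the lecture):

#include <stdio.h>

#define S 4              /* sets; B = 2 bytes/block, E = 1 line/set */

int valid[S];            /* one valid bit per set (E = 1) */
unsigned tags[S];        /* one tag per set */

void cache_read(unsigned addr)          /* addr is a 4-bit address */
{
    unsigned set = (addr >> 1) & 0x3;   /* middle s = 2 bits */
    unsigned tag = addr >> 3;           /* high t = 1 bit */

    if (valid[set] && tags[set] == tag)
        printf("%2u: hit\n", addr);
    else {
        printf("%2u: miss\n", addr);
        valid[set] = 1;                 /* load the block, evicting the old one */
        tags[set] = tag;
    }
}

int main(void)
{
    unsigned trace[] = {0, 1, 13, 8, 0};
    for (int i = 0; i < 5; i++)
        cache_read(trace[i]);           /* prints miss, hit, miss, miss, miss */
    return 0;
}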
Why Use Middle Bits as Index?
High-Order Bit Indexing
Adjacent memory lines would map to the same cache entry
Poor use of spatial locality

Middle-Order Bit Indexing
Consecutive memory lines map to different cache lines
Can hold a C-byte region of the address space in cache at one time

[Figure: a 4-line cache (index values 00x ... 11x) beside 16 memory lines 0000x ... 1111x; with high-order indexing each contiguous quarter of memory maps to a single cache line, while with middle-order indexing consecutive memory lines cycle through all four cache lines.]
Set-Associative Caches
Characterized by more than one line per set (here E = 2 lines per set)

[Figure: sets 0 ... S-1, each holding two lines, each line with its own valid bit, tag, and cache block.]
Accessing Set-Associative Caches
Set selection
Identical to direct-mapped cache
[Figure: the s set-index bits (e.g., 00001) select one set exactly as in the direct-mapped case; the only difference is that the selected set now contains E = 2 candidate lines.]
Accessing Set-Associative Caches
Line matching and word selection
Must compare the tag in each valid line in the selected set
(1) The valid bit must be set
(2) The tag bits in one of the cache lines must match the tag bits in the address
(3) If (1) and (2) hold, it is a cache hit, and the block offset selects the starting byte

[Figure: selected set i holds two valid lines with tags 1001 and 0110; the address tag 0110 matches the second line, and block offset 100 (byte 4) selects the start of the requested word w0 ... w3.]
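Line matching is just a short loop over the E lines of the selected set; a self-contained sketch (types and sizes assumed only for illustration):

#define E 2
#define B 16

typedef struct {
    int valid;
    unsigned tag;
    unsigned char data[B];
} cache_line_t;

/* Returns the requested byte on a hit, or -1 on a miss. */
int lookup(cache_line_t set[E], unsigned addr_tag, unsigned offset)
{
    for (int e = 0; e < E; e++)                      /* check every line in the set */
        if (set[e].valid && set[e].tag == addr_tag)  /* (1) valid and (2) tag match */
            return set[e].data[offset];              /* (3) hit: offset selects byte */
    return -1;                                       /* miss: fetch from next level */
}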
Write Strategies
On a hit:
Write-Through: write to cache and to memory
Write-Back: write just to cache; write to memory only when a block is replaced (requires a dirty bit)

On a miss:
Write-Allocate: allocate a cache line for the value to be written
Write-No-Allocate: don't allocate a line
Some processors buffer writes: proceed to next instruction before write completes
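A sketch of how write-back plus write-allocate might look for a single line; memory_read_block and memory_write_block are hypothetical stand-ins for the memory bus, not real library calls:

void memory_read_block(unsigned tag, unsigned char *buf);   /* hypothetical */
void memory_write_block(unsigned tag, unsigned char *buf);  /* hypothetical */

typedef struct {
    int valid, dirty;            /* dirty marks a block newer than memory */
    unsigned tag;
    unsigned char data[16];
} wb_line_t;

void wb_write(wb_line_t *line, unsigned tag, unsigned offset, unsigned char byte)
{
    if (!line->valid || line->tag != tag) {            /* write miss */
        if (line->valid && line->dirty)
            memory_write_block(line->tag, line->data); /* flush replaced block */
        memory_read_block(tag, line->data);            /* write-allocate the line */
        line->valid = 1;
        line->tag = tag;
        line->dirty = 0;
    }
    line->data[offset] = byte;   /* write just to the cache... */
    line->dirty = 1;             /* ...memory is updated only on replacement */
}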
Multi-Level Caches
Options: separate data and instruction caches, or a unified cache
[Figure: Processor (Regs, L1 d-cache, L1 i-cache) -> Unified L2 cache -> Memory -> disk]

              Regs    L1 caches   L2 (SRAM)   Memory (DRAM)   disk
size:         200 B   8-64 KB     1-4 MB      128 MB          30 GB
speed:        3 ns    3 ns        6 ns        60 ns           8 ms
$/Mbyte:                          $100/MB     $1.50/MB        $0.05/MB
line size:    8 B     32 B        32 B        8 KB

Larger, slower, and cheaper at each level down the hierarchy.
Intel Pentium Cache Hierarchy
L1 Data: 1-cycle latency, 16 KB, 4-way set associative, write-through, 32 B lines
L1 Instruction: 16 KB, 4-way set associative, 32 B lines
(Regs and both L1 caches sit on the processor chip)
L2 Unified: 128 KB - 2 MB, 4-way set associative, write-back, write-allocate, 32 B lines
Main Memory: up to 4 GB
Cache Performance Metrics
Miss Rate
Fraction of memory references not found in cache (misses/references)
Typical numbers: 3-10% for L1; can be quite small (e.g., < 1%) for L2, depending on size, etc.
Hit Time
Time to deliver a line in the cache to the processor (includes time to determine whether the line is in the cache)
Typical numbers:
1 clock cycle for L1 3-8 clock cycles for L2
Miss Penalty
Additional time required because of a miss
Typically 25-100 cycles for main memory
Average Access Time = Hit Time + Miss Rate * Miss Penalty
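To make the formula concrete with the typical numbers above (an illustrative calculation, not from the slides): with a 1-cycle hit time, a 5% miss rate, and a 50-cycle miss penalty, the average access time is 1 + 0.05 x 50 = 3.5 cycles, so even a single-digit miss rate can triple the effective access time.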
Writing Cache-Friendly Code
Repeated references to variables are good (temporal locality)
Stride-1 reference patterns are good (spatial locality)
Examples (cold cache, 4-byte words, 4-word cache blocks):

int sumarrayrows(int a[M][N])
{
    int i, j, sum = 0;

    for (i = 0; i < M; i++)
        for (j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

Miss rate = 1/4 = 25% (stride-1: one miss per 4-word block)

int sumarraycols(int a[M][N])
{
    int i, j, sum = 0;

    for (j = 0; j < N; j++)
        for (i = 0; i < M; i++)
            sum += a[i][j];
    return sum;
}

Miss rate = 100% (stride-N: each access touches a new block)
The Memory Mountain
Read throughput (read bandwidth)
Number of bytes read from memory per second (MB/s)
Memory mountain
Measured read throughput as a function of spatial and temporal locality
Compact way to characterize memory-system performance
Memory Mountain Test Function
/* The test function */
void test(int elems, int stride)
{
    int i, result = 0;
    volatile int sink;

    for (i = 0; i < elems; i += stride)
        result += data[i];
    sink = result; /* So compiler doesn't optimize away the loop */
}

/* Run test(elems, stride) and return read throughput (MB/s) */
double run(int size, int stride, double Mhz)
{
    double cycles;
    int elems = size / sizeof(int);

    test(elems, stride);                      /* warm up the cache */
    cycles = fcyc2(test, elems, stride, 0);   /* call test(elems,stride) */
    return (size / stride) / (cycles / Mhz);  /* convert cycles to MB/s */
}
Memory Mountain Main Routine
/* mountain.c - Generate the memory mountain. */
#define MINBYTES (1 << 10)   /* Working set size ranges from 1 KB */
#define MAXBYTES (1 << 23)   /* ... up to 8 MB */
#define MAXSTRIDE 16         /* Strides range from 1 to 16 */
#define MAXELEMS MAXBYTES/sizeof(int)

int data[MAXELEMS];          /* The array we'll be traversing */

int main()
{
    int size;                /* Working set size (in bytes) */
    int stride;              /* Stride (in array elements) */
    double Mhz;              /* Clock frequency */

    init_data(data, MAXELEMS); /* Initialize each element in data to 1 */
    Mhz = mhz(0);              /* Estimate the clock frequency */
    for (size = MAXBYTES; size >= MINBYTES; size >>= 1) {
        for (stride = 1; stride <= MAXSTRIDE; stride++)
            printf("%.1f\t", run(size, stride, Mhz));
        printf("\n");
    }
    exit(0);
}
The Memory Mountain
Measured on a 550 MHz Pentium III Xeon: 16 KB on-chip L1 d-cache, 16 KB on-chip L1 i-cache, 512 KB off-chip unified L2 cache

[Figure: 3-D surface of read throughput (MB/s, 0 to 1200) vs. stride (s1 ... s15 words) and working set size (2 KB ... 8 MB). The flat "ridges of temporal locality" correspond to working sets that fit in L1, in L2, or only in main memory; the downward "slopes of spatial locality" show throughput falling as stride grows.]
Ridges of Temporal Locality
Slice through the memory mountain with stride=1
Illuminates read throughputs of different caches and memory
[Figure: read throughput (MB/s, 0 to 1200) vs. working set size (1 KB ... 8 MB) at stride 1. Three plateaus are visible: the L1 cache region (smallest working sets, highest throughput), the L2 cache region, and the main memory region (largest working sets, lowest throughput).]
A Slope of Spatial Locality
Slice through memory mountain with size=256KB
Shows cache block size
[Figure: read throughput (MB/s, roughly 100 to 800) vs. stride (s1 ... s16 words) for a 256 KB working set. Throughput falls steadily as stride increases, then flattens once there is one access per cache line.]
Matrix-Multiplication Example
Major Cache Effects to Consider
Total cache size
Exploit temporal locality and keep the working set small (e.g., by using blocking)
Block size
Exploit spatial locality
Description:
Multiply N x N matrices
O(N^3) total operations
Accesses: N reads per source element; N values summed per destination (but the running sum may be held in a register)

/* ijk */
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;                      /* variable sum held in register */
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}
Miss-Rate Analysis for Matrix Multiply
Assume:
Line size = 32 B (big enough for four 64-bit words)
Matrix dimension (N) is very large
Approximate 1/N as 0.0
Cache is not even big enough to hold multiple rows
Analysis Method:
Look at access pattern of inner loop
[Figure: C = A x B, where A is indexed by (i,k), B by (k,j), and C by (i,j).]
Layout of C Arrays in Memory (review)
C arrays allocated in row-major order
Each row in contiguous memory locations

Stepping through columns in one row:
    for (i = 0; i < N; i++)
        sum += a[0][i];
Accesses successive elements of size k bytes
If block size (B) > k bytes, exploit spatial locality
Compulsory miss rate = k bytes / B
Stepping through rows in one column:
    for (i = 0; i < n; i++)
        sum += a[i][0];
Accesses distant elements; no spatial locality!
Compulsory miss rate = 1 (i.e., 100%)
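Both cases follow from C's row-major address formula, &a[i][j] = base + (i*N + j)*sizeof(element); a runnable sketch (dimensions assumed for illustration):

#include <stdio.h>

#define M 4
#define N 8

int a[M][N];   /* row-major: &a[i][j] == (char *)a + (i*N + j) * sizeof(int) */

int main(void)
{
    printf("row step:    %ld bytes\n",
           (long)((char *)&a[0][1] - (char *)&a[0][0]));  /* 4: likely same block */
    printf("column step: %ld bytes\n",
           (long)((char *)&a[1][0] - (char *)&a[0][0]));  /* 4*N = 32: new block  */
    return 0;
}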
Matrix Multiplication (ijk)
/* ijk */
for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A (i,*) row-wise, B (*,j) column-wise, C (i,j) fixed

Misses per inner-loop iteration: A = 0.25, B = 1.0, C = 0.0
Matrix Multiplication (jik)
/* jik */
for (j = 0; j < n; j++) {
    for (i = 0; i < n; i++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

Inner loop: A (i,*) row-wise, B (*,j) column-wise, C (i,j) fixed

Misses per inner-loop iteration: A = 0.25, B = 1.0, C = 0.0
Matrix Multiplication (kij)
/* kij */
for (k = 0; k < n; k++) {
    for (i = 0; i < n; i++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A (i,k) fixed, B (k,*) row-wise, C (i,*) row-wise

Misses per inner-loop iteration: A = 0.0, B = 0.25, C = 0.25
Matrix Multiplication (ikj)
/* ikj */
for (i = 0; i < n; i++) {
    for (k = 0; k < n; k++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

Inner loop: A (i,k) fixed, B (k,*) row-wise, C (i,*) row-wise

Misses per inner-loop iteration: A = 0.0, B = 0.25, C = 0.25
Matrix Multiplication (jki)
/* jki */
for (j = 0; j < n; j++) {
    for (k = 0; k < n; k++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop: A (*,k) column-wise, B (k,j) fixed, C (*,j) column-wise

Misses per inner-loop iteration: A = 1.0, B = 0.0, C = 1.0
Matrix Multiplication (kji)
/* kji */
for (k = 0; k < n; k++) {
    for (j = 0; j < n; j++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}

Inner loop: A (*,k) column-wise, B (k,j) fixed, C (*,j) column-wise

Misses per inner-loop iteration: A = 1.0, B = 0.0, C = 1.0
Summary of Matrix Multiplication
ijk (& jik): 2 loads, 0 stores; misses/iter = 1.25

for (i = 0; i < n; i++) {
    for (j = 0; j < n; j++) {
        sum = 0.0;
        for (k = 0; k < n; k++)
            sum += a[i][k] * b[k][j];
        c[i][j] = sum;
    }
}

kij (& ikj): 2 loads, 1 store; misses/iter = 0.5

for (k = 0; k < n; k++) {
    for (i = 0; i < n; i++) {
        r = a[i][k];
        for (j = 0; j < n; j++)
            c[i][j] += r * b[k][j];
    }
}

jki (& kji): 2 loads, 1 store; misses/iter = 2.0

for (j = 0; j < n; j++) {
    for (k = 0; k < n; k++) {
        r = b[k][j];
        for (i = 0; i < n; i++)
            c[i][j] += a[i][k] * r;
    }
}
Pentium Matrix Multiply Performance
Miss rates are helpful but not perfect predictors of performance
So what happened? Code scheduling also matters
[Figure: cycles/iteration (0 to 60) vs. array size n (25 ... 400) for all six loop orders (kji, jki, kij, ikj, jik, ijk). The column-wise pair jki/kji is slowest, ijk/jik fall in between, and kij/ikj are fastest and stay nearly flat as n grows.]
Improving Temporal Locality by Blocking
Example: Blocked matrix multiplication
"Block" (in this context) does not mean cache block; it means a sub-block within the matrix
Example: N = 8, sub-block size = 4:

    [A11 A12]   [B11 B12]   [C11 C12]
    [A21 A22] x [B21 B22] = [C21 C22]

Key idea: sub-blocks (i.e., Axy) can be treated just like scalars:
    C11 = A11*B11 + A12*B21    C12 = A11*B12 + A12*B22
    C21 = A21*B11 + A22*B21    C22 = A21*B12 + A22*B22
Blocked Matrix Multiply (bijk)
for (jj = 0; jj < n; jj += bsize) {
    for (i = 0; i < n; i++)
        for (j = jj; j < min(jj+bsize,n); j++)
            c[i][j] = 0.0;
    for (kk = 0; kk < n; kk += bsize) {
        for (i = 0; i < n; i++) {
            for (j = jj; j < min(jj+bsize,n); j++) {
                sum = 0.0;
                for (k = kk; k < min(kk+bsize,n); k++)
                    sum += a[i][k] * b[k][j];
                c[i][j] += sum;
            }
        }
    }
}
Blocked Matrix Multiply Analysis
Innermost loop pair multiplies a 1 x bsize sliver of A by a bsize x bsize block of B and sums into a 1 x bsize sliver of C
The loop over i steps through n row slivers of A and C, using the same block of B
for (i = 0; i < n; i++) {
    for (j = jj; j < min(jj+bsize,n); j++) {
        sum = 0.0;
        for (k = kk; k < min(kk+bsize,n); k++)   /* innermost loop pair */
            sum += a[i][k] * b[k][j];
        c[i][j] += sum;
    }
}

[Figure: for fixed (kk, jj), row sliver i of A meets the bsize x bsize block of B at (kk, jj) to update row sliver i of C.]

Each sliver element is accessed bsize times
Each block of B is reused n times in succession
Successive row slivers of C are updated
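A sizing note implied by this analysis (our inference, not stated on the slide): bsize should be chosen so the working set of one (jj, kk) step, roughly bsize^2 + 2*bsize array elements, fits comfortably in the cache; the bsize = 25 used in the measurements on the next slide needs only about 5 KB for 8-byte elements, well within a 16 KB L1.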
Pentium Blocked Matrix Multiply Performance
Blocking (bijk and bikj) improves performance by a factor of two over unblocked versions (ijk and jik)
Relatively insensitive to array size
[Figure: cycles/iteration (0 to 60) vs. array size n (25 ... 400) for kji, jki, kij, ikj, jik, ijk, plus bijk and bikj with bsize = 25. The two blocked versions run at roughly half the cycles of the unblocked ijk/jik and stay nearly flat as n grows.]
Concluding Observations
Programmer can optimize for cache performance
How data structures are organized
How data are accessed (nested loop structure; blocking is a general technique)

All systems favor cache-friendly code
Getting absolute optimum performance is very platform-specific (cache sizes, line sizes, associativities, etc.)
Can get most of the advantage with generic code:
Keep working set reasonably small (temporal locality)
Use small strides (spatial locality)