Transformers and Attention Models
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 1 April 25, 2024
Administrative
● Assignment 2 due 05/06
○ Covering PyTorch, the main deep learning framework used by AI researchers + what we recommend for your projects!
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 2 April 25, 2024
Last Time: Recurrent Neural Networks
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 3 April 25, 2024
Last Time: Variable-length computation graph with shared weights
[Diagram: one RNN step – h0 and input x1 pass through fW (weights W) to produce h1 and output y1 with loss L1]
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 4 April 25, 2024
Last Time: Variable-length computation graph with shared weights
[Diagram: the RNN unrolled for two steps – h0 → fW → h1 → fW → h2, inputs x1, x2, outputs y1, y2 with losses L1, L2]
W is reused (recurrently)!
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 5 April 25, 2024
Last Time: Variable-length computation graph with shared weights
[Diagram: the RNN unrolled for T steps – h0 → fW → h1 → … → hT, inputs x1 … xT, outputs y1 … yT with per-step losses L1 … LT summed into a total loss L]
Calculate the total loss across all timesteps to find dL/dW (backpropagation through time)!
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 6 April 25, 2024
Sequence to Sequence with RNNs: Encoder - Decoder
Input: Sequence x1, …, xT
Output: Sequence y1, …, yT'
A motivating example for today's discussion – machine translation! English → Italian
h1 h2 h3 h4
x1 x2 x3 x4
Sutskever et al, “Sequence to sequence learning with neural networks”, NeurIPS 2014
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 7 April 25, 2024
Sequence to Sequence with RNNs
Input: Sequence x1, … xT
Output: Sequence y1, …, yT’
h1 h2 h3 h4 s0
x1 x2 x3 x4 c
Sutskever et al, “Sequence to sequence learning with neural networks”, NeurIPS 2014
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 8 April 25, 2024
Sequence to Sequence with RNNs
Input: Sequence x1, … xT Decoder: st = gU(yt-1, st-1, c)
Output: Sequence y1, …, yT’ vediamo
h1 h2 h3 h4 s0 s1
x1 x2 x3 x4 c y0
Sutskever et al, “Sequence to sequence learning with neural networks”, NeurIPS 2014
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 9 April 25, 2024
Sequence to Sequence with RNNs
Input: Sequence x1, … xT Decoder: st = gU(yt-1, st-1, c)
Output: Sequence y1, …, yT’ vediamo il
h1 h2 h3 h4 s0 s1 s2
x1 x2 x3 x4 c y0 y1
Sutskever et al, “Sequence to sequence learning with neural networks”, NeurIPS 2014
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 10 April 25, 2024
Sequence to Sequence with RNNs
Input: Sequence x1, … xT Decoder: st = gU(yt-1, st-1, c)
Output: Sequence y1, …, yT’ vediamo il cielo [STOP]
h1 h2 h3 h4 s0 s1 s2 s3 s4
x1 x2 x3 x4 c y0 y1 y2 y3
Sutskever et al, “Sequence to sequence learning with neural networks”, NeurIPS 2014
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 11 April 25, 2024
Sequence to Sequence with RNNs
Input: Sequence x1, … xT Decoder: st = gU(yt-1, st-1, c)
Output: Sequence y1, …, yT’
Remember: during test time, we sample from the model's outputs until we sample [STOP].
[Diagram: encoder h1 … h4 → context c → decoder s0 … s4 produces "vediamo il cielo [STOP]" from inputs y0 … y3]
Sutskever et al, “Sequence to sequence learning with neural networks”, NeurIPS 2014
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 12 April 25, 2024
Sequence to Sequence with RNNs
Input: Sequence x1, … xT Decoder: st = gU(yt-1, st-1, c)
Output: Sequence y1, …, yT’ vediamo il cielo [STOP]
h1 h2 h3 h4 s0 s1 s2 s3 s4
x1 x2 x3 x4 c y0 y1 y2 y3
we see the sky → [START] vediamo il cielo
Q: Are there any problems with using c like this?
Sutskever et al, “Sequence to sequence learning with neural networks”, NeurIPS 2014
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 13 April 25, 2024
Sequence to Sequence with RNNs
Input: Sequence x1, … xT Decoder: st = gU(yt-1, st-1, c)
Output: Sequence y1, …, yT’ vediamo il cielo [STOP]
h1 h2 h3 h4 s0 s1 s2 s3 s4
x1 x2 x3 x4 c y0 y1 y2 y3
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 14 April 25, 2024
Sequence to Sequence with RNNs
Input: Sequence x1, … xT Decoder: st = gU(yt-1, st-1, c)
Output: Sequence y1, …, yT’ vediamo il cielo [STOP]
h1 h2 h3 h4 s0 s1 s2 s3 s4
x1 x2 x3 x4 c y0 y1 y2 y3
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 15 April 25, 2024
Sequence to Sequence with RNNs and Attention
Input: Sequence x1, … xT
Output: Sequence y1, …, yT’
h1 h2 h3 h4 s0
x1 x2 x3 x4
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 16 April 25, 2024
Sequence to Sequence with RNNs and Attention
Compute (scalar) alignment scores
et,i = fatt(st-1, hi) (fatt is a Linear Layer)
h1 h2 h3 h4 s0
x1 x2 x3 x4
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 17 April 25, 2024
Sequence to Sequence with RNNs and Attention
Compute (scalar) alignment scores
a11 a12 a13 a14
et,i = fatt(st-1, hi) (fatt is a Linear Layer)
h1 h2 h3 h4 s0
x1 x2 x3 x4
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 18 April 25, 2024
Sequence to Sequence with RNNs and Attention
Compute (scalar) alignment scores: et,i = fatt(st-1, hi)  (fatt is a linear layer)
Normalize the alignment scores with a softmax to get attention weights: 0 < at,i < 1, ∑i at,i = 1
From the final hidden state: initial decoder state s0
Compute the context vector as a weighted sum of hidden states: ct = ∑i at,i hi
[Diagram: encoder states h1 … h4 and s0 → scores e11 … e14 → softmax → weights a11 … a14 → context c1; decoder step s0 → s1 produces y1 = "vediamo" from c1 and y0]
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 19 April 25, 2024
Sequence to Sequence with RNNs and Attention
Compute (scalar) alignment scores: et,i = fatt(st-1, hi)  (fatt is a linear layer)
Normalize the alignment scores with a softmax to get attention weights: 0 < at,i < 1, ∑i at,i = 1
From the final hidden state: initial decoder state s0
Compute the context vector as a weighted sum of hidden states: ct = ∑i at,i hi
Use the context vector in the decoder: st = gU(yt-1, st-1, ct)
Intuition: the context vector attends to the relevant part of the input sequence. "vediamo" = "we see", so maybe a11 = a12 = 0.45, a13 = a14 = 0.05.
[Diagram: input "we see the sky" (x1 … x4) → encoder states h1 … h4; scores e11 … e14 → softmax → weights a11 … a14 → context c1; decoder step s0 → s1 produces y1 = "vediamo" from c1 and y0 = [START]]
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 20 April 25, 2024
Sequence to Sequence with RNNs and Attention
Compute alignment scores, attention weights, and the context vector as before, and use the context vector in the decoder: st = gU(yt-1, st-1, ct)
Intuition: the context vector attends to the relevant part of the input sequence ("vediamo" = "we see", so maybe a11 = a12 = 0.45, a13 = a14 = 0.05).
This is all differentiable! No supervision on attention weights – backprop through everything.
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 21 April 25, 2024
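To make the computation above concrete, here is a minimal PyTorch sketch of a single decoder timestep with this style of attention. Using a linear layer over the concatenation of s_{t-1} and h_i is one reasonable instantiation of "fatt is a linear layer"; the names (`f_att`, `enc_hidden`, `dec_state`) and sizes are illustrative, not from the slides.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T, D = 4, 128                     # illustrative: T encoder steps, hidden size D

enc_hidden = torch.randn(T, D)    # h_1 ... h_T from the encoder
dec_state = torch.randn(D)        # previous decoder state s_{t-1}

# f_att: a learned layer mapping (s_{t-1}, h_i) to a scalar score e_{t,i};
# here a linear layer over the concatenation of the two vectors.
f_att = nn.Linear(2 * D, 1)

scores = f_att(torch.cat([dec_state.expand(T, D), enc_hidden], dim=1)).squeeze(1)  # e_{t,1..T}
attn = F.softmax(scores, dim=0)   # attention weights a_{t,i}, sum to 1
context = attn @ enc_hidden       # context vector c_t = sum_i a_{t,i} h_i, shape (D,)
# c_t is then fed, together with y_{t-1} and s_{t-1}, into the decoder RNN cell g_U.
```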
Sequence to Sequence with RNNs and Attention
Repeat: use s1 to compute a new context vector c2
Compute (scalar) alignment scores et,i = fatt(st-1, hi)  (fatt is a linear layer)
[Diagram: scores e21 … e24 → softmax → weights a21 … a24 → context c2; the decoder has produced y1 = "vediamo"]
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 22 April 25, 2024
Sequence to Sequence with RNNs and Attention
Repeat: Use s1 to compute
new context vector c2
a21 a22 a23 a24
vediamo il
softmax
y1 y2
e21 e22 e23 e24
x1 x2 x3 x4 c1 y0 c2 y1
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 23 April 25, 2024
Sequence to Sequence with RNNs and Attention
Repeat: Use s1 to compute
new context vector c2
a21 a22 a23 a24
vediamo il
softmax
y1 y2
e21 e22 e23 e24
“il” = “the”
so maybe a21 = a22 = 0.05, a23 = 0.8, a24 = 0.1
[Diagram: input "we see the sky"; decoder inputs so far: [START] vediamo]
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 24 April 25, 2024
Sequence to Sequence with RNNs and Attention
Use a different context vector in each timestep of decoder
h1 h2 h3 h4 s0 s1 s2 s3 s4
x1 x2 x3 x4 c1 y0 c2 y1 c3 y2 c4 y3
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 25 April 25, 2024
Sequence to Sequence with RNNs and Attention
Example: English to French Visualize attention weights at,i
translation
at1 at2 at3 at4
softmax
h1 h2 h3 h4 st
x1 x2 x3 x4
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 26 April 25, 2024
Sequence to Sequence with RNNs and Attention
Visualize attention weights at,i
Example: English to French
translation
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 27 April 25, 2024
Sequence to Sequence with RNNs and Attention
Visualize attention weights at,i
Example: English to French
translation
Input: "The agreement on the European Economic Area was signed in August 1992."
Diagonal attention means words correspond in order
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 28 April 25, 2024
Sequence to Sequence with RNNs and Attention
Visualize attention weights at,i
Example: English to
French translation
Input: "The agreement on the European Economic Area was signed in August 1992."
Diagonal attention means words correspond in order
Attention figures out different word orders
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 29 April 25, 2024
Sequence to Sequence with RNNs and Attention
The context vectors don't use the fact that the hi form an ordered sequence – attention just treats them as an unordered set {hi}.
This is a general architecture + strategy given any set of input hidden vectors {hi}! (calculate attention weights + sum)
h1 h2 h3 h4 s0 s1 s2 s3 s4
x1 x2 x3 x4 c1 y0 c2 y1 c3 y2 c4 y3
Bahdanau et al, “Neural machine translation by jointly learning to align and translate”, ICLR 2015
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 30 April 25, 2024
Image Captioning using spatial features
Input: Image I
Output: Sequence y = y1, y2, ..., yT
An example network for image captioning without attention
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 31 April 25, 2024
Image Captioning using spatial features
Input: Image I
Output: Sequence y = y1, y2,..., yT
Encoder: h0 = fW(z)
where z is spatial CNN features
fW(.) is an MLP
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 32 April 25, 2024
Image Captioning using spatial features
Input: Image I Decoder: ht = gV(yt-1, ht-1, c)
Output: Sequence y = y1, y2,..., yT where context vector c is often c = h0
and output yt = T(ht)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 33 April 25, 2024
Image Captioning using spatial features
Input: Image I Decoder: ht = gV(yt-1, ht-1, c)
Output: Sequence y = y1, y2,..., yT where context vector c is often c = h0
and output yt = T(ht)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 34 April 25, 2024
Image Captioning using spatial features
Input: Image I Decoder: ht = gV(yt-1, ht-1, c)
Output: Sequence y = y1, y2,..., yT where context vector c is often c = h0
and output yt = T(ht)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 35 April 25, 2024
Image Captioning using spatial features
Input: Image I Decoder: ht = gV(yt-1, ht-1, c)
Output: Sequence y = y1, y2,..., yT where context vector c is often c = h0
and output yt = T(ht)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 36 April 25, 2024
Image Captioning using spatial features
Input: Image I Decoder: ht = gV(yt-1, ht-1, c)
Output: Sequence y = y1, y2,..., yT where context vector c is often c = h0
and output yt = T(ht)
Encoder: h0 = fW(z), where z is spatial CNN features and fW(.) is an MLP
Q: What is the problem with this setup? Think back to last time…
[Diagram: decoder outputs y1 … y4 = "person wearing hat [END]"]
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 37 April 25, 2024
Image Captioning using spatial features
Answer: the input is "bottlenecked" through c
- The model needs to encode everything it wants to say within c
- This is a problem if we want to generate really long descriptions, e.g. 100s of words long
[Diagram: decoder outputs y1 … y4 = "person wearing hat [END]"]
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 38 April 25, 2024
Image Captioning with RNNs and Attention
gif source
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 39 April 25, 2024
Image Captioning with RNNs and Attention
Alignment scores:
Compute alignment scores (scalars), an H x W grid:
e1,0,0 e1,0,1 e1,0,2
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 40 April 25, 2024
Image Captioning with RNNs and Attention
Alignment scores: compute alignment scores (scalars), an H x W grid
Attention: normalize to get attention weights, an H x W grid
e1,0,0 e1,0,1 e1,0,2 a1,0,0 a1,0,1 a1,0,2
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 41 April 25, 2024
Image Captioning with RNNs and Attention
Alignment scores: compute alignment scores (scalars), an H x W grid
Attention: normalize to get attention weights, an H x W grid
Compute context vector:
e1,0,0 e1,0,1 e1,0,2 a1,0,0 a1,0,1 a1,0,2
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 42 April 25, 2024
Image Captioning with RNNs and Attention
Decoder: yt = gV(yt-1, ht-1, ct) – a new context vector at every time step
Each timestep of the decoder uses a different context vector that looks at different parts of the input image
person
y1
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 43 April 25, 2024
Image Captioning with RNNs and Attention
Alignment scores (H x W) → Attention weights (H x W)
Decoder: yt = gV(yt-1, ht-1, ct) – a new context vector at every time step
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 44 April 25, 2024
Image Captioning with RNNs and Attention
Decoder: yt = gV(yt-1, ht-1, ct) – a new context vector at every time step
Each timestep of the decoder uses a different context vector that looks at different parts of the input image
person wearing
y1 y2
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 45 April 25, 2024
Image Captioning with RNNs and Attention
Decoder: yt = gV(yt-1, ht-1, ct) – a new context vector at every time step
Each timestep of the decoder uses a different context vector that looks at different parts of the input image
y1 y2 y3
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 46 April 25, 2024
Image Captioning with RNNs and Attention
Decoder: yt = gV(yt-1, ht-1, ct) – a new context vector at every time step
Each timestep of the decoder uses a different context vector that looks at different parts of the input image
y1 y2 y3 y4
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 47 April 25, 2024
Image Captioning with RNNs and Attention
Alignment scores (H x W): e1,0,0 … e1,2,2   Attention weights (H x W): a1,0,0 … a1,2,2
This entire process is differentiable.
- The model chooses its own attention weights. No attention supervision is required.
[Diagram: decoder outputs y1 … y4 = "person wearing hat [END]"]
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 48 April 25, 2024
Image Captioning with Attention
Xu et al, “Show, Attend, and Tell: Neural Image Caption Generation with Visual Attention”, ICML 2015
Figure copyright Kelvin Xu, Jimmy Lei Ba, Jamie Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio, 2015. Reproduced with permission.
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 49 April 25, 2024
Image Captioning with RNNs and Attention
A general and useful tool!
Calculating vectors that are learned, weighted averages over inputs and features
[Diagram: alignment scores e (H x W) → attention weights a (H x W); decoder outputs y1 … y4 = "person wearing hat [END]"]
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 50 April 25, 2024
Attention we just saw in image captioning
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 51 April 25, 2024
Attention we just saw in image captioning
Operations:
Alignment: ei,j = fatt(h, zi,j)
Alignment
Inputs:
h Features: z (shape: H x W x D)
Query: h (shape: D)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 52 April 25, 2024
Attention we just saw in image captioning
Attention
a1,0 a1,1 a1,2
Operations:
a2,0 a2,1 a2,2 Alignment: ei,j = fatt(h, zi,j)
Attention: a = softmax(e)
softmax
Alignment
Inputs:
h Features: z (shape: H x W x D)
Query: h (shape: D)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 53 April 25, 2024
Attention we just saw in image captioning
c
Outputs:
context vector: c (shape: D)
mul + add
Attention
a1,0 a1,1 a1,2
Operations:
a2,0 a2,1 a2,2 Alignment: ei,j = fatt(h, zi,j)
Attention: a = softmax(e)
Output: c = ∑i,j ai,jzi,j
softmax
Alignment
Inputs:
h Features: z (shape: H x W x D)
Query: h (shape: D)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 54 April 25, 2024
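A minimal sketch of this attention operation over a grid of image features with a single query vector h. For simplicity it uses a plain dot product for fatt (an assumption; the slides leave fatt as a learned function), and all shapes are made up.

```python
import torch
import torch.nn.functional as F

H, W, D = 3, 3, 64          # illustrative grid size and feature dim
z = torch.randn(H, W, D)    # spatial CNN features
h = torch.randn(D)          # query vector (e.g. the decoder hidden state)

e = (z * h).sum(dim=-1)                         # alignment scores e_{i,j}, shape (H, W)
a = F.softmax(e.flatten(), dim=0).view(H, W)    # attention weights over all H*W positions
c = (a.unsqueeze(-1) * z).sum(dim=(0, 1))       # context vector c = sum_{i,j} a_{i,j} z_{i,j}
```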
Attention we just saw in image captioning
c
Outputs:
context vector: c (shape: D)
mul + add
Attention
a1,0 a1,1 a1,2
Operations:
Alignment: ei,j = fatt(h, zi,j)
Attention: a = softmax(e)
Output: c = ∑i,j ai,j zi,j
How is this different from the attention mechanism in transformers? We'll go into that next.
[Diagram: attention weights a2,0 a2,1 a2,2; features z0,0 z0,1 z0,2 → softmax → alignment scores e0,0 e0,1 e0,2]
Alignment
Inputs:
h Features: z (shape: H x W x D)
Query: h (shape: D)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 55 April 25, 2024
General attention layer – used in LLMs + beyond
c
Outputs:
context vector: c (shape: D)
mul + add
a0
Attention
a1 Operations:
a2
Alignment: ei = fatt(h, xi)
Attention: a = softmax(e)
Output: c = ∑i ai xi
softmax
Input vectors
x0 e0
Alignment
x1 e1
x2 e2
Attention operation is permutation invariant.
- Doesn't care about the ordering of the features
- Stretch the H x W features into N = H x W vectors
Inputs:
h Input vectors: x (shape: N x D)
Query: h (shape: D)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 56 April 25, 2024
General attention layer
c
Outputs:
context vector: c (shape: D)
mul + add
a0
Attention
Change fatt(.) to a dot product – this can actually work well in practice, but a simple dot product can have some issues…
a1
a2
Operations:
Alignment: ei = h ᐧ xi
Attention: a = softmax(e)
Output: c = ∑i ai xi
softmax
Input vectors
x0 e0
Alignment
x1 e1
x2 e2
Inputs:
h Input vectors: x (shape: N x D)
Query: h (shape: D)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 57 April 25, 2024
General attention layer
c
Outputs:
context vector: c (shape: D)
mul + add
Change fatt(.) to a scaled simple dot product:
Operations:
Alignment: ei = h ᐧ xi / √D
Attention: a = softmax(e)
Output: c = ∑i ai xi
- Larger dimensions mean more terms in the dot product sum.
- So the variance of the logits is higher; large-magnitude vectors will produce much higher logits.
- So the post-softmax distribution has lower entropy, assuming logits are IID. Scaling by √D keeps the logit variance roughly constant.
Input vectors
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 58 April 25, 2024
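A quick numerical check of the claim above, assuming IID standard-normal vectors: the variance of the unscaled dot product grows roughly linearly with the dimension D, while dividing by √D keeps it near 1, so the softmax logits stay in a reasonable range. Sizes here are arbitrary.

```python
import torch

torch.manual_seed(0)
for D in (16, 256, 4096):
    h = torch.randn(10000, D)      # 10k random query vectors
    x = torch.randn(10000, D)      # 10k random input vectors
    raw = (h * x).sum(dim=1)       # unscaled dot products h . x
    scaled = raw / D ** 0.5        # scaled dot products h . x / sqrt(D)
    print(f"D={D:5d}  var(raw)={raw.var().item():8.1f}  var(scaled)={scaled.var().item():.2f}")
# var(raw) grows roughly linearly with D, so unscaled softmax logits get large and
# the post-softmax distribution becomes very peaked; scaling keeps the variance near 1.
```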
General attention layer
y0 y1 y2
Outputs:
mul(→) + add (↑)
context vectors: y (shape: D)
Multiple query vectors – each query creates a new, corresponding output context vector
Operations:
Alignment: ei,j = qj ᐧ xi / √D
Attention: a = softmax(e)
Output: yj = ∑i ai,j xi
Allows us to compute multiple attention context vectors at once
[Diagram: attention weights ai,j for each (input i, query j), combined via softmax (↑) and mul(→) + add(↑)]
Input vectors
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 59 April 25, 2024
General attention layer
y0 y1 y2
Outputs:
mul(→) + add (↑)
context vectors: y (shape: D)
Attention
Operations:
Alignment: ei,j = qj ᐧ xi / √D
Attention: a = softmax(e)
Output: yj = ∑i ai,j xi
Notice that the input vectors are used for both the alignment as well as the attention calculations.
- We can add more expressivity to the layer by adding a different FC layer before each of the two steps.
Input vectors
Inputs:
q0 q1 q2 Input vectors: x (shape: N x D)
Queries: q (shape: M x D)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 60 April 25, 2024
General attention layer
v0
v1
v2
Operations:
Key vectors: k = xWk
Value vectors: v = xWv
Notice that the input vectors are used for both the alignment as well as the attention calculations.
- We can add more expressivity to the layer by adding a different FC layer before each of the two steps.
Input vectors
x0 k0
x1 k1
x2 k2
Inputs:
q0 q1 q2 Input vectors: x (shape: N x D)
Queries: q (shape: M x Dk)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 61 April 25, 2024
General attention layer
y0 y1 y2
Outputs:
context vectors: y (shape: Dv)
mul(→) + add (↑)
Operations:
Key vectors: k = xWk
Value vectors: v = xWv
Alignment: ei,j = qj ᐧ ki / √D
Attention: a = softmax(e)
Output: yj = ∑i ai,j vi
The input and output dimensions can now change depending on the key and value FC layers.
Since the alignment scores are just scalars, the value vectors can be any dimension we want.
softmax (↑)
Input vectors
x0 k0 e0,0 e0,1 e0,2
Alignment
Inputs:
q0 q1 q2 Input vectors: x (shape: N x D)
Queries: q (shape: M x Dk)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 62 April 25, 2024
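Putting the operations on this slide together, here is a minimal sketch of the general attention layer with separate key and value projections. The class and parameter names (`GeneralAttention`, `W_k`, `W_v`) and the sizes are illustrative.

```python
import torch
import torch.nn as nn

class GeneralAttention(nn.Module):
    """Attend over N input vectors x, given M external query vectors q."""
    def __init__(self, d_in, d_k, d_v):
        super().__init__()
        self.W_k = nn.Linear(d_in, d_k, bias=False)   # key vectors:   k = x W_k
        self.W_v = nn.Linear(d_in, d_v, bias=False)   # value vectors: v = x W_v
        self.scale = d_k ** 0.5

    def forward(self, x, q):              # x: (N, d_in), q: (M, d_k)
        k, v = self.W_k(x), self.W_v(x)   # (N, d_k), (N, d_v)
        e = q @ k.t() / self.scale        # (M, N) alignment scores q_j . k_i / sqrt(d_k)
        a = e.softmax(dim=-1)             # attention weights over the N inputs
        return a @ v                      # (M, d_v): y_j = sum_i a_{j,i} v_i

attn = GeneralAttention(d_in=64, d_k=32, d_v=48)
y = attn(torch.randn(5, 64), torch.randn(3, 32))   # 3 context vectors, each 48-dim
```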
General attention layer This is a working example of how we could use an
attention layer + CNN encoder for image captioning
y0 y1 y2
Outputs:
mul(→) + add (↑)
context vectors: y (shape: Dv)
Attention
v1 a1,0 a1,1 a1,2
Operations: Encoder: h0 = fW(z)
Key vectors: k = xWk
v2 a2,0 a2,1 a2,2
Value vectors: v = xWv
where z is spatial CNN features
Alignment: ei,j = qj ᐧ ki / √D fW(.) is an MLP
softmax (↑) Attention: a = softmax(e)
z0,0 z0,1 z0,2
Input vectors
Output: yj = ∑i ai,j vi h0
x0 k0 e0,0 e0,1 e0,2
CNN z1,0 z1,1 z1,2 MLP
Alignment
Inputs:
q0 q1 q2 Input vectors: x (shape: N x D) We used h0 as q0 previously
Queries: q (shape: M x Dk)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 63 April 25, 2024
Lecture 8:
Video Lecture Supplement
Attention and Transformers
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 64 April 25, 2024
Next: The Self-attention Layer
y0 y1 y2
Outputs:
mul(→) + add (↑)
context vectors: y (shape: Dv)
Attention
v1 a1,0 a1,1 a1,2
Operations:
Key vectors: k = xWk
Value vectors: v = xWv
Alignment: ei,j = qj ᐧ ki / √D
Attention: a = softmax(e)
Output: yj = ∑i ai,j vi
softmax (↑)
Input vectors
Idea: leverage the strengths of attention layers without the need for separate query vectors.
x0 k0 e0,0 e0,1 e0,2
Alignment
Inputs:
q0 q1 q2 Input vectors: x (shape: N x D)
Queries: q (shape: M x Dk)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 65 April 25, 2024
Self attention layer
Attention: a = softmax(e)
x0 Output: yj = ∑i ai,j vi
x1
x2
No input query vectors anymore
Inputs:
q0 q1 q2 Input vectors: x (shape: N x D)
Queries: q (shape: M x Dk)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 66 April 25, 2024
Self attention layer
y0 y1 y2
Outputs:
mul(→) + add (↑)
context vectors: y (shape: Dv)
Attention
v1 a1,0 a1,1 a1,2
Operations:
Key vectors: k = xWk
v2 a2,0 a2,1 a2,2
Value vectors: v = xWv
Query vectors: q = xWq
softmax (↑) Alignment: ei,j = qj ᐧ ki / √D
Input vectors
Attention: a = softmax(e)
x0 k0 e0,0 e0,1 e0,2 Output: yj = ∑i ai,j vi
Alignment
Inputs:
q0 q1 q2 Input vectors: x (shape: N x D)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 67 April 25, 2024
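A minimal single-head self-attention sketch following the operations listed above, where queries, keys, and values all come from the input vectors x. Names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, d_in, d_k, d_v):
        super().__init__()
        self.W_q = nn.Linear(d_in, d_k, bias=False)   # q = x W_q
        self.W_k = nn.Linear(d_in, d_k, bias=False)   # k = x W_k
        self.W_v = nn.Linear(d_in, d_v, bias=False)   # v = x W_v
        self.scale = d_k ** 0.5

    def forward(self, x):                             # x: (N, d_in)
        q, k, v = self.W_q(x), self.W_k(x), self.W_v(x)
        a = (q @ k.t() / self.scale).softmax(dim=-1)  # (N, N) attention weights
        return a @ v                                  # (N, d_v) context vectors

y = SelfAttention(d_in=64, d_k=64, d_v=64)(torch.randn(3, 64))
```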
Self attention layer - attends over sets of inputs
y0 y1 y2
Outputs:
mul(→) + add (↑)
context vectors: y (shape: Dv)
Attention
y0 y1 y2
v1 a1,0 a1,1 a1,2
Operations:
Key vectors: k = xWk
v2 a2,0 a2,1 a2,2
Value vectors: v = xWv
self-attention
Query vectors: q = xWq
softmax (↑) Alignment: ei,j = qj ᐧ ki / √D x0 x1 x2
Input vectors
Attention: a = softmax(e)
x0 k0 e0,0 e0,1 e0,2 Output: yj = ∑i ai,j vi
Alignment
Inputs:
q0 q1 q2 Input vectors: x (shape: N x D)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 68 April 25, 2024
Self attention layer - attends over sets of inputs
y1 y0 y2 y2 y1 y0 y0 y1 y2
x1 x0 x2 x2 x1 x0 x0 x1 x2
Permutation equivariant
Problem: How can we encode ordered sequences like language or spatially ordered image features?
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 69 April 25, 2024
Positional encoding
y0 y1 y2
self-attention
x0 x1 x2
p0 p1 p2
position encoding
Concatenate or add a special positional encoding pj to each input vector xj.
We use a function pos: ℕ → ℝ^d to map the position j of the vector to a d-dimensional vector, so pj = pos(j).
Possible desirable properties of pos(.):
1. It should output a unique encoding for each time-step (word's position in a sentence)
2. Distance between any two time-steps should be consistent across sentences with different lengths.
3. Our model should generalize to longer sentences without any effort. Its values should be bounded.
4. It must be deterministic.
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 70 April 25, 2024
Positional encoding
Options for pos(.)
y0 y1 y2
1. Learn a lookup table:
○ Learn parameters to use for pos(t) for t ε [0, T)
self-attention ○ Lookup table contains T x d parameters.
x0 x1 x2
p0 p1 p2
position encoding
Concatenate a special positional encoding pj to each input vector xj.
We use a function pos: ℕ → ℝ^d to map the position j of the vector to a d-dimensional vector, so pj = pos(j).
Possible desirable properties of pos(.):
1. It should output a unique encoding for each time-step (word's position in a sentence)
2. Distance between any two time-steps should be consistent across sentences with different lengths.
3. Our model should generalize to longer sentences without any effort. Its values should be bounded.
4. It must be deterministic.
Vaswani et al., "Attention is all you need", NeurIPS 2017
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 71 April 25, 2024
Positional encoding
Options for pos(.)
y0 y1 y2
1. Learn a lookup table:
○ Learn parameters to use for pos(t) for t ε [0, T)
self-attention ○ Lookup table contains T x d parameters.
position encoding
x1 x0 x2
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 72 April 25, 2024
Positional encoding
Options for pos(.)
y0 y1 y2
1. Learn a lookup table:
○ Learn parameters to use for pos(t) for t ε [0, T)
self-attention ○ Lookup table contains T x d parameters.
x0 x1 x2
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 73 April 25, 2024
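A sketch of two options for pos(.): the learned lookup table with T x d parameters described above, and the fixed sinusoidal encoding from Vaswani et al. (the sinusoidal formula comes from that paper, not from these slides). Sizes are illustrative.

```python
import math
import torch
import torch.nn as nn

d, T = 128, 50   # embedding dim and maximum sequence length (illustrative)

# Option 1: learned lookup table with T x d parameters.
learned_pos = nn.Embedding(T, d)
p_learned = learned_pos(torch.arange(T))              # (T, d)

# Option 2: fixed sinusoidal encoding (Vaswani et al., 2017):
#   pos(j)[2i] = sin(j / 10000^(2i/d)),  pos(j)[2i+1] = cos(j / 10000^(2i/d))
j = torch.arange(T, dtype=torch.float32).unsqueeze(1)            # positions, (T, 1)
div = torch.exp(torch.arange(0, d, 2, dtype=torch.float32) * (-math.log(10000.0) / d))
p_sin = torch.zeros(T, d)
p_sin[:, 0::2] = torch.sin(j * div)
p_sin[:, 1::2] = torch.cos(j * div)

# Either way, p_j is then added to (or concatenated with) the input vector x_j.
```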
Masked self-attention layer
y0 y1 y2
Outputs:
mul(→) + add (↑)
context vectors: y (shape: Dv)
Operations:
Key vectors: k = xWk
Value vectors: v = xWv
Query vectors: q = xWq
Alignment: ei,j = qj ᐧ ki / √D
Attention: a = softmax(e)
Output: yj = ∑i ai,j vi
- Prevent vectors from looking at future vectors.
- Allows us to parallelize attention across time
- Don't need to calculate the context vectors from the previous timestep first!
[Diagram: masked attention weights – a0,0 a0,1 a0,2 / 0 a1,1 a1,2 / 0 0 a2,2]
Alignment
Inputs:
q0 q1 q2 Input vectors: x (shape: N x D)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 74 April 25, 2024
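A minimal sketch of the masking trick: alignment scores for future positions are set to -inf before the softmax, so output j only attends to inputs i ≤ j, and all timesteps are still computed in one parallel matrix operation. Shapes are illustrative.

```python
import torch

N, D = 4, 64                                     # illustrative sequence length and dim
q, k, v = torch.randn(N, D), torch.randn(N, D), torch.randn(N, D)

e = q @ k.t() / D ** 0.5                         # (N, N) alignment scores

# Causal mask: output j may only attend to inputs i <= j.
mask = torch.triu(torch.ones(N, N, dtype=torch.bool), diagonal=1)
e = e.masked_fill(mask, float('-inf'))           # -inf -> zero weight after softmax

a = e.softmax(dim=-1)                            # lower-triangular attention weights
y = a @ v                                        # all timesteps computed in parallel
```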
Multi-head self-attention layer
- Multiple self-attention “heads” in parallel
y0 y1 y2
Q: Why do this?
x0 x1 x2 x0 x1 x2 x0 x1 x2
Split
x0 x1 x2
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 75 April 25, 2024
Multi-head self-attention layer
- Multiple self-attention “heads” in parallel
y0 y1 y2
x0 x1 x2 x0 x1 x2 x0 x1 x2
Split
x0 x1 x2
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 76 April 25, 2024
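A minimal multi-head self-attention sketch: the model dimension is split across heads, each head runs the scaled dot-product attention from the previous slides, and the head outputs are concatenated and projected. The fused q/k/v projection and the names used here are implementation choices, not prescribed by the slides.

```python
import torch
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, d_model, n_heads):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d_head = n_heads, d_model // n_heads
        self.W_qkv = nn.Linear(d_model, 3 * d_model, bias=False)  # fused q, k, v projections
        self.W_out = nn.Linear(d_model, d_model, bias=False)      # output projection

    def forward(self, x):                                         # x: (N, d_model)
        N = x.shape[0]
        q, k, v = self.W_qkv(x).chunk(3, dim=-1)
        # Split each of q, k, v into h heads of size d_head: (h, N, d_head)
        split = lambda t: t.view(N, self.h, self.d_head).transpose(0, 1)
        q, k, v = split(q), split(k), split(v)
        a = (q @ k.transpose(-2, -1) / self.d_head ** 0.5).softmax(dim=-1)  # (h, N, N)
        y = (a @ v).transpose(0, 1).reshape(N, -1)                # concatenate heads
        return self.W_out(y)                                      # (N, d_model)

y = MultiHeadSelfAttention(d_model=64, n_heads=8)(torch.randn(5, 64))
```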
General attention versus self-attention
y0 y1 y2 y0 y1 y2
attention self-attention
k0 k1 k2 v0 v1 v2 q0 q1 q2 x0 x1 x2
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 77 April 25, 2024
Comparing RNNs to Transformer
RNNs
(+) LSTMs work reasonably well for long sequences.
(-) Expects an ordered sequence of inputs
(-) Sequential computation: subsequent hidden states can only be computed after the previous
ones are done.
Transformer:
(+) Good at long sequences. Each attention calculation looks at all inputs.
(+) Can operate over unordered sets or ordered sequences with positional encodings.
(+) Parallel computation: All alignment and attention scores for all inputs can be done in parallel.
(-) Requires a lot of memory: N x M alignment and attention scalars need to be calculated and stored for a single self-attention head. (but GPUs are getting bigger and better)
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 78 April 25, 2024
“ImageNet Moment for Natural
Language Processing”
Pretraining:
Download a lot of text from the
internet
Finetuning:
Fine-tune the Transformer on your
own NLP task
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 79 April 25, 2024
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 80 April 25, 2024
Image Captioning using Transformers
Input: Image I
Output: Sequence y = y1, y2,..., yT
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 81 April 25, 2024
Image Captioning using Transformers
Input: Image I
Output: Sequence y = y1, y2,..., yT
Encoder: c = TW(z)
where z is spatial CNN features
TW(.) is the transformer encoder
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 82 April 25, 2024
Image Captioning using Transformers
Input: Image I Decoder: yt = TD(y0:t-1, c)
Output: Sequence y = y1, y2,..., yT where TD(.) is the transformer decoder
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 83 April 25, 2024
The Transformer encoder block
c0,0 c0,1 c0,2 ... c2,2
Transformer encoder
xN
Positional encoding
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 84 April 25, 2024
The Transformer encoder block
c0,0 c0,1 c0,2 ... c2,2
Transformer encoder
xN
Positional encoding
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 85 April 25, 2024
The Transformer encoder block
c0,0 c0,1 c0,2 ... c2,2
Transformer encoder
xN
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 86 April 25, 2024
The Transformer encoder block
c0,0 c0,1 c0,2 ... c2,2
Transformer encoder
xN
+ Residual connection
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 87 April 25, 2024
The Transformer encoder block
c0,0 c0,1 c0,2 ... c2,2
Transformer encoder
xN
+ Residual connection
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 88 April 25, 2024
The Transformer encoder block
c0,0 c0,1 c0,2 ... c2,2
Transformer encoder
xN
MLP MLP over each vector individually
+ Residual connection
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 89 April 25, 2024
The Transformer encoder block
c0,0 c0,1 c0,2 ... c2,2
Transformer encoder
+ Residual connection
xN
MLP MLP over each vector individually
+ Residual connection
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 90 April 25, 2024
The Transformer encoder block
c0,0 c0,1 c0,2 ... c2,2
+ Residual connection
xN
MLP MLP over each vector individually
+ Residual connection
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 91 April 25, 2024
The Transformer encoder block
c0,0 c0,1 c0,2 ... c2,2
Transformer Encoder Block:
Inputs: Set of vectors x
Layer norm and MLP operate independently per vector.
Highly scalable, highly parallelizable, but high memory usage.
[Diagram: positional encoding, multi-head self-attention, residual connections (+), layer norm, and a per-vector MLP stacked into one block, repeated xN; outputs y0 … y3]
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 92 April 25, 2024
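A minimal sketch of one encoder block matching the summary above, using PyTorch's built-in multi-head attention and the post-norm ordering of the original Transformer paper (an assumption about the exact layer ordering; names and sizes are illustrative).

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """Self-attention + per-vector MLP, each followed by a residual add and layer norm."""
    def __init__(self, d_model, n_heads, d_mlp):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(nn.Linear(d_model, d_mlp), nn.GELU(), nn.Linear(d_mlp, d_model))
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                  # x: (batch, N, d_model)
        a, _ = self.attn(x, x, x)          # multi-head self-attention (q = k = v = x)
        x = self.norm1(x + a)              # residual connection + layer norm
        x = self.norm2(x + self.mlp(x))    # MLP applied to each vector independently
        return x

block = EncoderBlock(d_model=64, n_heads=8, d_mlp=256)
y = block(torch.randn(2, 9, 64))           # e.g. a flattened 3x3 grid of image features
```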
The Transformer decoder
person wearing hat [END]
y0 y1 y2 y3
Transformer decoder
Made up of N decoder blocks.
xN
c0,1
c0,2
...
y0 y1 y2 y3
[START] person wearing hat Vaswani et al, “Attention is all you need”, NeurIPS 2017
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 93 April 25, 2024
The Transformer decoder block
Let's dive into the transformer decoder block.
[Diagram: the decoder block's final FC layer produces outputs y0 … y3; encoder outputs c0,0 … c2,2 feed into the stack of decoder blocks (xN)]
y0 y1 y2 y3
x0 x1 x2 x3
[START] person wearing hat Vaswani et al, “Attention is all you need”, NeurIPS 2017
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 94 April 25, 2024
The Transformer decoder block
Most of the network is the same as the transformer encoder.
Masked multi-head self-attention ensures we only look at past inputs.
[Diagram: decoder block (repeated xN), bottom to top: masked multi-head self-attention → + → layer norm → … → MLP → + → layer norm → FC → outputs y0 … y3 = "person wearing hat [END]"; encoder outputs c0,0 … c2,2 shown on the left]
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 95 April 25, 2024
The Transformer decoder block
Multi-head attention block attends over the transformer encoder outputs.
For image captioning, this is how we inject image features into the decoder.
[Diagram: decoder block (repeated xN) – masked multi-head self-attention at the bottom, then a multi-head attention block that takes k, v from the encoder outputs c0,0 … c2,2 and q from the decoder, each sub-layer wrapped in a residual (+) and layer norm; inputs x0 … x3 = "[START] person wearing hat", outputs y0 … y3 = "person wearing hat [END]"]
Vaswani et al., "Attention is all you need", NeurIPS 2017
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 96 April 25, 2024
The Transformer decoder block
Transformer Decoder Block:
- Masked multi-head self-attention only interacts with past inputs.
- The multi-head attention block is NOT self-attention. It attends over encoder outputs.
- Highly scalable, highly parallelizable, but high memory usage.
[Diagram: decoder block (repeated xN) with encoder outputs c0,0 … c2,2 providing k, v to the multi-head attention block]
y0 y1 y2 y3
x0 x1 x2 x3
[START] person wearing hat Vaswani et al, “Attention is all you need”, NeurIPS 2017
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 97 April 25, 2024
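A minimal sketch of one decoder block: masked self-attention over the tokens generated so far, cross-attention whose keys and values come from the encoder outputs c, and a per-vector MLP, each with a residual connection and layer norm (post-norm ordering assumed; names and sizes are illustrative).

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, d_model, n_heads, d_mlp):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d_model, d_mlp), nn.GELU(), nn.Linear(d_mlp, d_model))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(d_model), nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    def forward(self, y, c):   # y: (B, T, d) decoder inputs, c: (B, N, d) encoder outputs
        T = y.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=y.device), diagonal=1)
        a, _ = self.self_attn(y, y, y, attn_mask=causal)   # masked self-attention over past tokens
        y = self.norm1(y + a)
        a, _ = self.cross_attn(y, c, c)    # queries from the decoder, keys/values from the encoder
        y = self.norm2(y + a)
        return self.norm3(y + self.mlp(y)) # per-vector MLP

block = DecoderBlock(d_model=64, n_heads=8, d_mlp=256)
out = block(torch.randn(2, 4, 64), torch.randn(2, 9, 64))
```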
Image Captioning using transformers
- No recurrence at all
y1 y2 y3 y4
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 98 April 25, 2024
Image Captioning using transformers
- Perhaps we don't need
convolutions at all?
y1 y2 y3 y4
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 99 April 25, 2024
Image Captioning using ONLY transformers
- Transformers from pixels to language
y1 y2 y3 y4
Transformer encoder
y0 y1 y2 y3
...
Dosovitskiy et al, “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale”, ArXiv 2020
[START] person wearing hat
Colab link to an implementation of vision transformers
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 100 April 25, 2024
ViTs – Vision Transformers
Figure from:
Dosovitskiy et al, “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale”, ArXiv 2020
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 101 April 25, 2024
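A sketch of the ViT front end described in the figure: split the image into 16x16 patches, linearly embed each patch, prepend a learnable class token, and add positional embeddings before the encoder blocks. The sizes here are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

img = torch.randn(1, 3, 224, 224)             # (B, C, H, W)
patch, d_model = 16, 192

# Patch embedding as a strided conv: each 16x16 patch becomes one d_model-dim token.
to_patches = nn.Conv2d(3, d_model, kernel_size=patch, stride=patch)
tokens = to_patches(img).flatten(2).transpose(1, 2)   # (1, 196, d_model)

# Prepend a learnable [class] token and add learned positional embeddings.
cls = nn.Parameter(torch.zeros(1, 1, d_model))
pos = nn.Parameter(torch.zeros(1, tokens.size(1) + 1, d_model))
x = torch.cat([cls.expand(tokens.size(0), -1, -1), tokens], dim=1) + pos   # (1, 197, d_model)

# x then goes through a stack of transformer encoder blocks;
# the final [class] token is used for classification.
```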
Vision Transformers vs. ResNets
Dosovitskiy et al, “An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale”, ArXiv 2020
Colab link to an implementation of vision transformers
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 102 April 25, 2024
Vision Transformers
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 103 April 25, 2024
ConvNets strike back!
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 104 April 25, 2024
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 105 April 25, 2024
Summary
- Adding attention to RNNs allows them to "attend" to different
parts of the input at every time step
- The general attention layer is a new type of layer that can be
used to design new neural network architectures
- Transformers are a type of layer that uses self-attention and
layer norm.
○ It is highly scalable and highly parallelizable
○ Faster training, larger models, better performance across
vision and language tasks
○ They are quickly replacing RNNs, LSTMs, and may(?) even
replace convolutions.
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 106 April 25, 2024
Next time: Object Detection + Segmentation
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 107 April 25, 2024
Appendix Slides from Previous Years
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 108 April 25, 2024
Image Captioning with Attention
Soft attention
Hard attention (requires reinforcement learning)
Xu et al, “Show, Attend, and Tell: Neural Image Caption Generation with Visual Attention”, ICML 2015
Figure copyright Kelvin Xu, Jimmy Lei Ba, Jamie Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio, 2015. Reproduced with permission.
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 109 April 25, 2024
Example: CNN with Self-Attention
Input Image
CNN
Features:
CxHxW
Cat image is free to use under the Pixabay License
Zhang et al, “Self-Attention Generative Adversarial Networks”, ICML 2018 Slide credit: Justin Johnson
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 110 April 25, 2024
Example: CNN with Self-Attention
Queries:
C’ x H x W
Keys:
CNN C’ x H x W
1x1 Conv
Features:
CxHxW
Cat image is free to use under the Pixabay License
Values:
C’ x H x W
1x1 Conv
Zhang et al, “Self-Attention Generative Adversarial Networks”, ICML 2018 Slide credit: Justin Johnson
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 111 April 25, 2024
Example: CNN with Self-Attention
Attention Weights
Queries:
Transpose (H x W) x (H x W)
C’ x H x W
1x1 Conv
Features:
CxHxW
Cat image is free to use under the Pixabay License
Values:
C’ x H x W
1x1 Conv
Zhang et al, “Self-Attention Generative Adversarial Networks”, ICML 2018 Slide credit: Justin Johnson
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 112 April 25, 2024
Example: CNN with Self-Attention
Attention Weights
Queries:
Transpose (H x W) x (H x W)
C’ x H x W
1x1 Conv
Features:
CxHxW C’ x H x W
Cat image is free to use under the Pixabay License
Values:
C’ x H x W
x
1x1 Conv
Zhang et al, “Self-Attention Generative Adversarial Networks”, ICML 2018 Slide credit: Justin Johnson
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 113 April 25, 2024
Example: CNN with Self-Attention
Attention Weights
Queries:
Transpose (H x W) x (H x W)
C’ x H x W
1x1 Conv
Features:
CxHxW C’ x H x W
Cat image is free to use under the Pixabay License
Values:
C’ x H x W
x 1x1 Conv
1x1 Conv
Zhang et al, “Self-Attention Generative Adversarial Networks”, ICML 2018 Slide credit: Justin Johnson
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 114 April 25, 2024
Example: CNN with Self-Attention
Residual Connection
Attention Weights
Queries:
Transpose (H x W) x (H x W)
C’ x H x W
Values:
C’ x H x W
x 1x1 Conv
1x1 Conv
Self-Attention Module
Zhang et al, “Self-Attention Generative Adversarial Networks”, ICML 2018 Slide credit: Justin Johnson
Fei-Fei Li, Ehsan Adeli, Zane Durante Lecture 9 - 115 April 25, 2024
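A minimal sketch of the self-attention module built up in these slides: 1x1 convolutions produce queries, keys, and values from the C x H x W feature map, attention is computed over all H x W positions, and the result is added back through the residual connection. This is a simplified version of Zhang et al.'s module (e.g. it omits their learned residual gate), with illustrative names and sizes.

```python
import torch
import torch.nn as nn

class CNNSelfAttention(nn.Module):
    def __init__(self, c, c_prime):
        super().__init__()
        self.q = nn.Conv2d(c, c_prime, 1)     # queries: C' x H x W
        self.k = nn.Conv2d(c, c_prime, 1)     # keys:    C' x H x W
        self.v = nn.Conv2d(c, c_prime, 1)     # values:  C' x H x W
        self.out = nn.Conv2d(c_prime, c, 1)   # project back to C channels

    def forward(self, x):                     # x: (B, C, H, W)
        B, C, H, W = x.shape
        q = self.q(x).flatten(2)              # (B, C', H*W)
        k = self.k(x).flatten(2)
        v = self.v(x).flatten(2)
        attn = (q.transpose(1, 2) @ k).softmax(dim=-1)     # (H*W x H*W) attention weights
        y = (v @ attn.transpose(1, 2)).view(B, -1, H, W)   # attention-weighted values
        return x + self.out(y)                # residual connection

y = CNNSelfAttention(c=64, c_prime=16)(torch.randn(2, 64, 32, 32))
```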