Lec5. Keypoint Detection
Image matching
[Figure: image pairs to match (photos by Diva Sian and swashford); a harder case where there is seemingly no chance to match]
Basic steps:
1) Detect distinctive interest points
2) Extract invariant descriptors
E(u, v) = \sum_{x,y} w(x, y)\,\big[ I(x + u, y + v) - I(x, y) \big]^2
Source: R. Szeliski
Corner Detection by Auto-correlation
Change in appearance of window w(x, y) for the shift [u, v]:
E(u, v) = \sum_{x,y} w(x, y)\,\big[ I(x + u, y + v) - I(x, y) \big]^2
[Figure: input image I(x, y), window w(x, y), and the error surface E(u, v), shown for the shifts (0, 0) and (3, 2)]
E(u, v) = \sum_{x,y} w(x, y)\,\big[ I(x + u, y + v) - I(x, y) \big]^2
Think-Pair-Share: correspond the three red crosses to (b, c, d).
E(u, v) as a surface
Corner Detection by Auto-correlation
E(u, v) = \sum_{x,y} w(x, y)\,\big[ I(x + u, y + v) - I(x, y) \big]^2
[Figure: the exact surface E(u, v) \approx its local quadratic approximation near (0, 0)]
Recall: Taylor series expansion
A function f can be represented by an infinite series of its derivatives at a single point a:
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x - a)^n
[Figure (Wikipedia): Taylor approximations of f(x) = e^x centered at x = 0]
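As a concrete instance (a standard identity, not from the slides), for f(x) = e^x expanded about a = 0 the second-order approximation is
e^{x} \approx f(0) + f'(0)\,x + \tfrac{1}{2} f''(0)\,x^{2} = 1 + x + \tfrac{x^{2}}{2}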
Local quadratic approximation of E(u, v) in the neighborhood of (0, 0) is given by the second-order Taylor expansion:
E(u, v) \approx E(0,0) + [u\ v] \begin{bmatrix} E_u(0,0) \\ E_v(0,0) \end{bmatrix} + \frac{1}{2}\,[u\ v] \begin{bmatrix} E_{uu}(0,0) & E_{uv}(0,0) \\ E_{uv}(0,0) & E_{vv}(0,0) \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix}
Ignore the function value (it is 0), ignore the first derivatives (they are 0 at (0, 0)), and just look at the shape given by the second derivatives.
Corner Detection: Mathematics
E(u, v) = \sum_{x,y} w(x, y)\,\big[ I(x + u, y + v) - I(x, y) \big]^2
Second derivatives:
E_{uu}(u, v) = \sum_{x,y} 2\,w(x, y)\, I_x(x + u, y + v)\, I_x(x + u, y + v)
E_{uv}(u, v) = \sum_{x,y} 2\,w(x, y)\, I_y(x + u, y + v)\, I_x(x + u, y + v)
Evaluated at (u, v) = (0, 0):
E_{vv}(0, 0) = \sum_{x,y} 2\,w(x, y)\, I_y(x, y)\, I_y(x, y)
E_{uv}(0, 0) = \sum_{x,y} 2\,w(x, y)\, I_x(x, y)\, I_y(x, y)
Corner Detection: Mathematics
E(u, v) = \sum_{x,y} w(x, y)\,\big[ I(x + u, y + v) - I(x, y) \big]^2
E(u, v) \approx [u\ v] \left( \sum_{x,y} w(x, y) \begin{bmatrix} I_x(x, y)^2 & I_x(x, y)\, I_y(x, y) \\ I_x(x, y)\, I_y(x, y) & I_y(x, y)^2 \end{bmatrix} \right) \begin{bmatrix} u \\ v \end{bmatrix}
using
E(0, 0) = 0
E_u(0, 0) = 0
E_v(0, 0) = 0
E_{uu}(0, 0) = \sum_{x,y} 2\,w(x, y)\, I_x(x, y)\, I_x(x, y)
E_{vv}(0, 0) = \sum_{x,y} 2\,w(x, y)\, I_y(x, y)\, I_y(x, y)
E_{uv}(0, 0) = \sum_{x,y} 2\,w(x, y)\, I_x(x, y)\, I_y(x, y)
Corner Detection: Mathematics
The quadratic approximation simplifies to
E(u, v) \approx [u\ v]\, M \begin{bmatrix} u \\ v \end{bmatrix}
where
M = \sum_{x,y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
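As a quick numerical sanity check of this simplification (a sketch with an arbitrary synthetic image, window, and shift, not part of the lecture), one can compare the exact E(u, v) against the quadratic form [u v] M [u v]^T for a small shift:

```python
# Compare the exact SSD error E(u, v) with the quadratic form [u v] M [u v]^T.
# The smoothed random image, box window, and shift are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
I = gaussian_filter(rng.standard_normal((64, 64)), 3.0)   # smooth synthetic image
Iy, Ix = np.gradient(I)                                    # image derivatives
w = np.zeros_like(I)
w[24:40, 24:40] = 1.0                                      # box window w(x, y)

# Second moment matrix M = sum_{x,y} w(x,y) [Ix^2, IxIy; IxIy, Iy^2]
M = np.array([[np.sum(w * Ix * Ix), np.sum(w * Ix * Iy)],
              [np.sum(w * Ix * Iy), np.sum(w * Iy * Iy)]])

def E(u, v):
    """Exact E(u, v) = sum_{x,y} w(x,y) [I(x+u, y+v) - I(x,y)]^2 for integer shifts."""
    shifted = np.roll(np.roll(I, -v, axis=0), -u, axis=1)  # I(x+u, y+v)
    return np.sum(w * (shifted - I) ** 2)

u, v = 1, 1
print("exact E(1, 1):        ", E(u, v))
print("quadratic [u v]M[u v]:", np.array([u, v]) @ M @ np.array([u, v]))
```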
Corners as distinctive interest points
M = \sum_{x,y} w(x, y) \begin{bmatrix} I_x I_x & I_x I_y \\ I_y I_x & I_y I_y \end{bmatrix}
Notation: I_x \Leftrightarrow \frac{\partial I}{\partial x}, \quad I_y \Leftrightarrow \frac{\partial I}{\partial y}, \quad I_x I_y \Leftrightarrow \frac{\partial I}{\partial x}\,\frac{\partial I}{\partial y}
James Hays
Interpreting the second moment matrix
The surface E(u, v) is locally approximated by a quadratic form. Let's try to understand its shape.
E(u, v) \approx [u\ v]\, M \begin{bmatrix} u \\ v \end{bmatrix}, \qquad M = \sum_{x,y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
James Hays
Interpreting the second moment matrix
Consider a horizontal "slice" of E(u, v): \quad [u\ v]\, M \begin{bmatrix} u \\ v \end{bmatrix} = \text{const}
This is the equation of an ellipse.
James Hays
Interpreting the second moment matrix
I x2 I x I y 1 0
M = w( x, y ) 2
=
x, y I x I y I y 0 2
(max)-1/2
(min)-1/2
James Hays
Classification of image points using eigenvalues of M
[Figure: the (\lambda_1, \lambda_2) plane]
"Edge": \lambda_2 \gg \lambda_1
"Corner": \lambda_1 and \lambda_2 are large, \lambda_1 \sim \lambda_2; E increases in all directions
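To make the picture above concrete, here is a minimal sketch (the thresholds and function name are assumptions, not from the lecture) that labels a point from the eigenvalues of its 2×2 second moment matrix:

```python
# Classify a point as flat / edge / corner from the eigenvalues of M.
import numpy as np

def classify_point(M, flat_thresh=1e-3, edge_ratio=5.0):
    """Label a 2x2 second moment matrix M; thresholds are illustrative."""
    lam1, lam2 = np.linalg.eigvalsh(M)          # ascending order: lam1 <= lam2
    if lam2 < flat_thresh:                      # both eigenvalues small -> flat region
        return "flat"
    if lam2 / max(lam1, 1e-12) > edge_ratio:    # one eigenvalue dominates -> edge
        return "edge"
    return "corner"                             # both large and comparable -> corner

# Hand-made examples
print(classify_point(np.array([[1e-4, 0.0], [0.0, 1e-4]])))  # flat
print(classify_point(np.array([[10.0, 0.0], [0.0, 0.1]])))   # edge
print(classify_point(np.array([[5.0, 0.0], [0.0, 4.0]])))    # corner
```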
Classification of image points using eigenvalues of M
Cornerness: \quad C = \lambda_1 \lambda_2 - \alpha\,(\lambda_1 + \lambda_2)^2
\alpha: constant (0.04 to 0.06)
Since \det(M) = \lambda_1 \lambda_2 and \operatorname{trace}(M) = \lambda_1 + \lambda_2, this is equivalently
C = \det(M) - \alpha\,\operatorname{trace}(M)^2
"Corner": C > 0; "Edge": C < 0; "Flat" region: |C| small
Harris corner detector
1. Compute image derivatives I_x, I_y.
2. Compute the components of M as products of derivatives: I_x^2, I_y^2, I_{xy} = I_x I_y.
3. Smooth each component with a Gaussian filter g(\cdot) of width \sigma.
4. Compute the cornerness (\circ denotes element-wise product):
C = \det(M) - \alpha\,\operatorname{trace}(M)^2 = g(I_x^2) \circ g(I_y^2) - \big[ g(I_x \circ I_y) \big]^2 - \alpha\,\big[ g(I_x^2) + g(I_y^2) \big]^2
5. Threshold on C to pick points with high cornerness.
6. Non-maximum suppression to pick peaks (a code sketch of these steps follows below).
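Here is a minimal NumPy/SciPy sketch of the six steps above (the function names, threshold, and non-maximum-suppression window are illustrative assumptions, not the lecture's reference implementation):

```python
# Harris detector sketch: derivatives -> products -> Gaussian weighting ->
# cornerness -> threshold -> non-maximum suppression.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris_cornerness(img, sigma=1.0, alpha=0.05):
    """Cornerness map C = det(M) - alpha * trace(M)^2."""
    Iy, Ix = np.gradient(img.astype(float))          # 1. image derivatives
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy        # 2. products of derivatives
    Sxx = gaussian_filter(Ixx, sigma)                # 3. Gaussian window g()
    Syy = gaussian_filter(Iyy, sigma)
    Sxy = gaussian_filter(Ixy, sigma)
    det_M = Sxx * Syy - Sxy ** 2                     # 4. cornerness
    trace_M = Sxx + Syy
    return det_M - alpha * trace_M ** 2

def harris_keypoints(img, rel_threshold=1e-2, nms_size=5, **kw):
    C = harris_cornerness(img, **kw)
    mask = C > rel_threshold * C.max()               # 5. threshold on C
    mask &= C == maximum_filter(C, size=nms_size)    # 6. keep local maxima only
    ys, xs = np.nonzero(mask)
    return np.stack([xs, ys], axis=1)                # (N, 2) array of (x, y)
```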
Harris Detector: Steps
[Figure sequence: input images → compute corner response C → find points with large corner response (C > threshold) → take only the points of local maxima of C → detected corners overlaid on the images]
Harris Detector: Properties
• Translation invariance? Yes.
• Rotation invariance? Yes: the ellipse rotates, but its shape (the eigenvalues of M) stays the same.
• Scale invariance? No.
• How can we detect scale-invariant interest points?
How to cope with transformations?
• Exhaustive search
• Invariance
• Robustness
Exhaustive search
• Multi-scale approach
[Figure: the signature function f computed on Image 1 and on Image 2, where Image 2 is rescaled (scale = 1/2)]
K. Grauman, B. Leibe
Automatic Scale Selection
• Function responses for increasing scale (scale signature)
What Is A Useful Signature Function f?
• Functions for determining scale: f = Kernel * Image
Kernels:
L = \sigma^2 \big( G_{xx}(x, y, \sigma) + G_{yy}(x, y, \sigma) \big) \qquad (Laplacian)
DoG = G(x, y, k\sigma) - G(x, y, \sigma) \qquad (Difference of Gaussians)
where the Gaussian is G(x, y, \sigma) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2 + y^2}{2\sigma^2}}
Note: both kernels are invariant to scale and rotation.
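A possible SciPy sketch of these two signature functions (the helper names and the value of k are assumptions):

```python
# Scale-normalized Laplacian-of-Gaussian and Difference-of-Gaussians responses.
import numpy as np
from scipy.ndimage import gaussian_laplace, gaussian_filter

def log_response(img, sigma):
    """sigma^2 * (G_xx + G_yy) convolved with the image (scale-normalized LoG)."""
    return sigma ** 2 * gaussian_laplace(img.astype(float), sigma)

def dog_response(img, sigma, k=1.6):
    """[G(x, y, k*sigma) - G(x, y, sigma)] convolved with the image."""
    img = img.astype(float)
    return gaussian_filter(img, k * sigma) - gaussian_filter(img, sigma)
```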
What Is A Useful Signature Function?
• Laplacian-of-Gaussian = “blob” detector
K. Grauman, B. Leibe
Characteristic scale
We define the characteristic scale as the scale that produces the peak of the Laplacian response.
[Figure: Laplacian response vs. scale; the characteristic scale is at the peak]
T. Lindeberg (1998). "Feature detection with automatic scale selection." International Journal of Computer Vision 30 (2): 77–116. Source: Lana Lazebnik
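A minimal sketch of characteristic-scale selection at a single pixel, under an assumed scale range and step: evaluate the scale-normalized Laplacian over a range of σ and keep the σ with the peak response magnitude.

```python
# Pick the characteristic scale at pixel (x, y) as the argmax of |sigma^2 * LoG|.
import numpy as np
from scipy.ndimage import gaussian_laplace

def characteristic_scale(img, x, y, sigmas=2.0 ** np.arange(0.0, 4.25, 0.25)):
    img = img.astype(float)
    responses = [abs((s ** 2) * gaussian_laplace(img, s)[y, x]) for s in sigmas]
    return float(sigmas[int(np.argmax(responses))])
```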
Laplacian-of-Gaussian (LoG)
• Interest points: local maxima in the scale space of the Laplacian-of-Gaussian, L_{xx}(\sigma) + L_{yy}(\sigma)
[Figure: LoG responses stacked over increasing scales \sigma_1, \dots, \sigma_5; the output is a list of (x, y, \sigma)]
K. Grauman, B. Leibe
Scale-space blob detector: Example
Ruye Wang
Alternative approach
Approximate LoG with Difference-of-Gaussian (DoG).
[Figure: G(x, y, k\sigma) - G(x, y, \sigma) = DoG kernel]
K. Grauman, B. Leibe
Find local maxima in position-scale space of DoG
[Figure: from the input image, compute Gaussians at scales \sigma, k\sigma, 2k\sigma, \dots; subtract adjacent levels to obtain the DoG layers; find maxima across position and scale; the output is a list of (x, y, \sigma)]
K. Grauman, B. Leibe
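A compact sketch of this DoG detector (octave handling is omitted; σ0, k, the number of scales, and the threshold are assumptions, and intensities are assumed to be roughly in [0, 1]):

```python
# Build a Gaussian stack, take differences, and keep 3x3x3 local maxima in
# (scale, y, x) as keypoints (x, y, sigma).
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_keypoints(img, sigma0=1.6, k=2 ** 0.5, n_scales=6, threshold=0.01):
    img = img.astype(float)
    sigmas = [sigma0 * k ** i for i in range(n_scales)]
    gaussians = np.stack([gaussian_filter(img, s) for s in sigmas])  # (S, H, W)
    dog = gaussians[1:] - gaussians[:-1]                             # (S-1, H, W)
    is_max = (dog == maximum_filter(dog, size=3)) & (dog > threshold)
    scale_idx, ys, xs = np.nonzero(is_max)
    return [(int(x), int(y), sigmas[int(i) + 1])                     # list of (x, y, sigma)
            for x, y, i in zip(xs, ys, scale_idx)]
```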
Example of keypoint detection