Motion and Optical Flow
Slides from Ce Liu, Steve Seitz, Larry Zitnick, Ali
Farhadi
We live in a moving world
• Perceiving, understanding and predicting motion is an
important part of our daily lives
Motion and perceptual organization
• Even “impoverished” motion data can evoke a
strong percept
G. Johansson, “Visual Perception of Biological Motion and a Model For Its Analysis”,
Perception and Psychophysics 14, 201-211, 1973.
Seeing motion from a static picture?
http://www.ritsumei.ac.jp/~akitaoka/index-e.html
More examples
How is this possible?
• The true mechanism is yet to be revealed
• fMRI data suggest that the illusion is related to some component of eye movements
• We don’t expect computer vision to “see” motion from these stimuli, yet
What do you see?
In fact, …
The cause of motion
• Three factors in imaging process
– Light
– Object
– Camera
• Varying any of them causes motion
– Static camera, moving objects (surveillance)
– Moving camera, static scene (3D capture)
– Moving camera, moving scene (sports, movie)
– Static camera, moving objects, moving light (time lapse)
Motion scenarios (priors)
Static camera, moving scene Moving camera, static scene
Moving camera, moving scene Static camera, moving scene, moving light
We still don’t handle these scenarios
How can we recover motion?
Recovering motion
• Feature-tracking
– Extract visual features (corners, textured areas) and “track” them over
multiple frames
• Optical flow
– Recover image motion at each pixel from spatio-temporal image
brightness variations (optical flow)
Two problems, one registration method
B. Lucas and T. Kanade. An iterative image registration technique with an application to
stereo vision. In Proceedings of the International Joint Conference on Artificial Intelligence, pp.
674–679, 1981.
Feature tracking
• Challenges
– Figure out which features can be tracked
– Efficiently track across frames
– Some points may change appearance over time
(e.g., due to rotation, moving into shadows, etc.)
– Drift: small errors can accumulate as appearance
model is updated
– Points may appear or disappear: need to be able
to add/delete tracked points
Feature tracking
I(x,y,t) I(x,y,t+1)
• Given two subsequent frames, estimate the point
translation
• Key assumptions of Lucas-Kanade Tracker
• Brightness constancy: projection of the same point looks the same in
every frame
• Small motion: points do not move very far
• Spatial coherence: points move like their neighbors
The brightness constancy constraint
I(x,y,t) I(x,y,t+1)
• Brightness Constancy Equation:
I(x, y, t) = I(x + u, y + v, t + 1)
Take the Taylor expansion of I(x + u, y + v, t + 1) at (x, y, t) to linearize the right side (I_x, I_y: image derivatives along x and y; I_t: difference over frames):
I(x + u, y + v, t + 1) ≈ I(x, y, t) + I_x·u + I_y·v + I_t
I(x + u, y + v, t + 1) − I(x, y, t) ≈ I_x·u + I_y·v + I_t
So: I_x·u + I_y·v + I_t ≈ 0  →  ∇I·[u v]^T + I_t = 0
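The three derivatives in the constraint can be estimated with simple finite differences. A minimal sketch (NumPy assumed); real implementations usually use smoothed derivative filters rather than raw differences:

```python
import numpy as np

def gradients(I0, I1):
    """Finite-difference estimates of I_x, I_y, I_t for two frames.

    I0, I1: grayscale frames at times t and t+1 (float arrays).
    """
    Ix = np.gradient(I0, axis=1)   # image derivative along x (columns)
    Iy = np.gradient(I0, axis=0)   # image derivative along y (rows)
    It = I1 - I0                   # difference over frames
    return Ix, Iy, It
```

For a pure x-ramp image shifted in brightness, this recovers Ix = 1 and Iy = 0 everywhere, matching the analytic derivatives.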
The brightness constancy constraint
Can we use this equation to recover image motion (u,v) at each
pixel?
∇I·[u v]^T + I_t = 0
• How many equations and unknowns per pixel?
• One equation (this is a scalar equation!), two unknowns (u, v)
The component of the motion perpendicular to the gradient
(i.e., parallel to the edge) cannot be measured
If (u, v) satisfies the equation, so does (u + u’, v + v’) whenever ∇I·[u’ v’]^T = 0
[Figure: image gradient perpendicular to an edge; (u’, v’) lies along the edge, so (u, v) and (u + u’, v + v’) are indistinguishable]
The aperture problem
Actual motion
The aperture problem
Perceived motion
The barber pole illusion
http://en.wikipedia.org/wiki/Barberpole_illusion
Solving the ambiguity…
B. Lucas and T. Kanade. An iterative image registration technique with an application to stereo vision. In Proceedings of the
International Joint Conference on Artificial Intelligence, pp. 674–679, 1981.
• How to get more equations for a pixel?
• Spatial coherence constraint
• Assume the pixel’s neighbors have the same (u,v)
– If we use a 5x5 window, that gives us 25 equations per pixel
Solving the ambiguity…
• Least squares problem: A d = b, with one row [I_x I_y] of A and one entry −I_t of b per pixel in the window

Matching patches across images
• Overconstrained linear system: K² equations, 2 unknowns d = (u, v)

Least squares solution for d given by (A^T A) d = A^T b:

[ ΣI_xI_x  ΣI_xI_y ] [u]     [ ΣI_xI_t ]
[ ΣI_xI_y  ΣI_yI_y ] [v] = − [ ΣI_yI_t ]

The summations are over all pixels in the K x K window.
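As a sketch of that solve, here is the 2x2 normal-equation system for a single K x K window, assuming the gradient images I_x, I_y, I_t are already computed; the function name and interface are illustrative, not from the slides:

```python
import numpy as np

def lk_window_flow(Ix, Iy, It, x, y, k=5):
    """Least-squares flow (u, v) for the k x k window centered at (x, y).

    Solves (A^T A) d = -A^T b with A rows [Ix Iy] and b entries It,
    summing over the window. Assumes A^T A is invertible.
    """
    r = k // 2
    wx = Ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
    wy = Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
    wt = It[y - r:y + r + 1, x - r:x + r + 1].ravel()
    ATA = np.array([[np.sum(wx * wx), np.sum(wx * wy)],
                    [np.sum(wx * wy), np.sum(wy * wy)]])
    ATb = -np.array([np.sum(wx * wt), np.sum(wy * wt)])
    u, v = np.linalg.solve(ATA, ATb)
    return u, v
```

On a synthetic image I(x, y) = x·y translated by one pixel in x, the window solve recovers (u, v) ≈ (1, 0) exactly, since the gradients vary across the window and A^T A is well-conditioned.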
Conditions for solvability
Optimal (u, v) satisfies Lucas-Kanade equation
When is this solvable? I.e., what are good points to track?
• A^T A should be invertible
• A^T A should not be too small due to noise
– eigenvalues λ1 and λ2 of A^T A should not be too small
• A^T A should be well-conditioned
– λ1/λ2 should not be too large (λ1 = larger eigenvalue)
Does this remind you of anything?
Criteria for Harris corner detector
Aperture problem
Corners Lines Flat regions
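The eigenvalue criteria above can be checked directly. A hedged sketch that classifies a window as corner / edge (line) / flat region from the eigenvalues of A^T A; the threshold tau is illustrative, not a standard value:

```python
import numpy as np

def trackability(Ix, Iy, x, y, k=5, tau=1e-2):
    """Classify a k x k window by the eigenvalues of A^T A.

    'corner': both eigenvalues large (good point to track);
    'edge':   one large, one small (aperture problem);
    'flat':   both small (no texture).
    """
    r = k // 2
    wx = Ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
    wy = Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
    ATA = np.array([[np.sum(wx * wx), np.sum(wx * wy)],
                    [np.sum(wx * wy), np.sum(wy * wy)]])
    lam2, lam1 = np.sort(np.linalg.eigvalsh(ATA))  # lam1 >= lam2
    if lam2 > tau:
        return 'corner'
    return 'edge' if lam1 > tau else 'flat'
```

This is essentially the same test the Harris/Shi-Tomasi corner detectors use to pick features worth tracking.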
Errors in Lucas-Kanade
• What are the potential causes of errors in this procedure?
– Suppose A^T A is easily invertible
– Suppose there is not much noise in the image
When our assumptions are violated
• Brightness constancy is not satisfied
• The motion is not small
• A point does not move like its neighbors
– window size is too large
– what is the ideal window size?
Dealing with larger movements: iterative refinement
1. Initialize (x’, y’) = (x, y), the original (x, y) position
2. Compute (u, v) from the Lucas-Kanade equation: (2nd moment matrix for the feature patch in the first image) × (displacement), with I_t = I(x’, y’, t+1) − I(x, y, t)
3. Shift window by (u, v): x’ = x’ + u; y’ = y’ + v
4. Recalculate I_t
5. Repeat steps 2–4 until the change is small
• Use interpolation for subpixel values
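Steps 1–5 above might be sketched as follows, with bilinear interpolation supplying the subpixel values; boundary handling is omitted and the interface is illustrative:

```python
import numpy as np

def bilinear(I, x, y):
    """Sample image I at subpixel location (x, y) by bilinear interpolation."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * I[y0, x0] + dx * (1 - dy) * I[y0, x0 + 1]
            + (1 - dx) * dy * I[y0 + 1, x0] + dx * dy * I[y0 + 1, x0 + 1])

def iterative_lk(I0, I1, x, y, k=5, n_iter=10, tol=1e-4):
    """Iteratively refine the flow at integer point (x, y).

    Repeats: recompute I_t against the shifted window in I1, solve the
    Lucas-Kanade system, shift, until the update is small.
    """
    Ix = np.gradient(I0, axis=1)
    Iy = np.gradient(I0, axis=0)
    r = k // 2
    ys, xs = np.mgrid[y - r:y + r + 1, x - r:x + r + 1]
    wx = Ix[ys, xs].ravel()
    wy = Iy[ys, xs].ravel()
    ATA = np.array([[np.sum(wx * wx), np.sum(wx * wy)],
                    [np.sum(wx * wy), np.sum(wy * wy)]])
    u = v = 0.0
    for _ in range(n_iter):
        # I_t = I(x' + u, y' + v, t+1) - I(x, y, t), sampled subpixel
        wt = np.array([bilinear(I1, xx + u, yy + v) - I0[yy, xx]
                       for yy, xx in zip(ys.ravel(), xs.ravel())])
        du, dv = np.linalg.solve(ATA, -np.array([np.sum(wx * wt),
                                                 np.sum(wy * wt)]))
        u += du
        v += dv
        if du * du + dv * dv < tol * tol:
            break
    return u, v
```

Note that A^T A is built once from the first frame; only I_t is recomputed each iteration, which is what makes the refinement cheap.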
Revisiting the small motion assumption
• Is this motion small enough?
– Probably not—it’s much larger than one pixel (2nd order terms dominate)
– How might we solve this problem?
Reduce the resolution!
Coarse-to-fine optical flow estimation
[Figure: Gaussian pyramid of image 1 (t) and Gaussian pyramid of image 2 (t+1); run iterative L-K at the top (coarsest) level, then repeatedly warp & upsample and run iterative L-K again, down to the full-resolution images]
A Few Details
• Top Level
– Apply L-K to get a flow field representing the flow from
the first frame to the second frame.
– Apply this flow field to warp the first frame toward the
second frame.
– Rerun L-K on the new warped image to get a flow field
from it to the second frame.
– Repeat till convergence.
• Next Level
– Upsample the flow field to the next level as the first
guess of the flow at that level.
– Apply this flow field to warp the first frame toward the
second frame.
– Rerun L-K and warping till convergence as above.
• Etc.
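The pyramid recipe above can be sketched as a control-flow skeleton. The single-level solver lk is left as a pluggable placeholder (any iterative L-K routine that warps by the initial flow and returns a refined dense flow field), and a box-filter downsample stands in for a proper Gaussian pyramid:

```python
import numpy as np

def downsample(I):
    """Blur-and-halve: a crude pyramid step (2x2 box average for brevity)."""
    return (I[:-1:2, :-1:2] + I[1::2, :-1:2]
            + I[:-1:2, 1::2] + I[1::2, 1::2]) / 4.0

def coarse_to_fine(I0, I1, levels=3, lk=None):
    """Coarse-to-fine flow skeleton following the recipe in the slides.

    lk(J0, J1, init_flow) should warp J0 by init_flow, rerun iterative
    L-K, and return a refined dense flow; it is a placeholder here.
    """
    pyr0, pyr1 = [I0], [I1]
    for _ in range(levels - 1):
        pyr0.append(downsample(pyr0[-1]))
        pyr1.append(downsample(pyr1[-1]))
    # start with zero flow at the top (coarsest) level
    flow = np.zeros(pyr0[-1].shape + (2,))
    for J0, J1 in zip(reversed(pyr0), reversed(pyr1)):
        if flow.shape[:2] != J0.shape:
            # upsample the flow field as the first guess at this level;
            # vectors double because pixels are twice as far apart
            flow = 2.0 * np.repeat(np.repeat(flow, 2, axis=0), 2, axis=1)
            flow = flow[:J0.shape[0], :J0.shape[1]]
        if lk is not None:
            flow = lk(J0, J1, flow)   # warp + rerun iterative L-K
    return flow
```

The key design point is the factor of 2 when upsampling the flow: a displacement of u pixels at one level corresponds to 2u pixels at the next finer level.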
Coarse-to-fine optical flow estimation
[Figure: Gaussian pyramids of image H (image 1) and image I (image 2); a motion of u=10 pixels at the finest level appears as u=5, u=2.5, and u=1.25 pixels at successively coarser levels]
The Flower Garden Video
What should the optical flow be?
Optical Flow Results
* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003
Optical Flow Results
* From Khurram Hassan-Shafique CAP5415 Computer Vision 2003
Flow quality evaluation
• Middlebury flow page
– http://vision.middlebury.edu/flow/
Ground Truth
Flow quality evaluation
• Middlebury flow page
– http://vision.middlebury.edu/flow/
Lucas-Kanade flow Ground Truth
Flow quality evaluation
• Middlebury flow page
– http://vision.middlebury.edu/flow/
Best-in-class alg Ground Truth
Video stabilization
Video denoising
Video super resolution
Robust Visual Motion Analysis:
Piecewise-Smooth Optical Flow
Ming Ye
Electrical Engineering
University of Washington
Estimating Piecewise-Smooth Optical Flow
with Global Matching and Graduated Optimization
Problem Statement:
Assuming only brightness conservation and
piecewise-smooth motion, find the optical flow
to best describe the intensity change in three
frames.
Approach: Matching-Based Global Optimization
• Step 1. Robust local gradient-based method for
high-quality initial flow estimate.
• Step 2. Global gradient-based method to improve the
flow-field coherence.
• Step 3. Global matching that minimizes energy by a
greedy approach.
TT: Translating Tree
150x150 (Barron 94)

        e∠ (°)   e|•| (pix)   e (pix)
BA      2.60     0.128        0.0724
S3      0.248    0.0167       0.00984

e: error in pixels, cdf: cumulative distribution function for all pixels
[Plot: error cdfs for BA and S3]
DT: Diverging Tree
150x150 (Barron 94)
        e∠ (°)   e|•| (pix)   e (pix)
BA      6.36     0.182        0.114
S3      2.60     0.0813       0.0507
YOS: Yosemite Fly-Through
316x252 (Barron, cloud excluded)
        e∠ (°)   e|•| (pix)   e (pix)
BA      2.71     0.185        0.118
S3      1.92     0.120        0.0776
TAXI: Hamburg Taxi
256x190 (Barron 94), max speed 3.0 pix/frame
[Figure: results for LMS, BA, and Ours, with error map and smoothness error]
Traffic
512x512 (Nagel), max speed: 6.0 pix/frame
[Figure: results for BA and Ours, with error map and smoothness error]
FG: Flower Garden
360x240 (Black), max speed: 7 pix/frame
[Figure: results for BA, LMS, and Ours, with error map and smoothness error]
Summary
• Major contributions from Lucas, Tomasi, Kanade
– Tracking feature points
– Optical flow
– Stereo
– Structure from motion
• Key ideas
– By assuming brightness constancy, truncated Taylor
expansion leads to simple and fast patch matching across
frames
– Coarse-to-fine registration
– Global approach by former EE student Ming Ye