
Lecture:

Edge Detection

Juan Carlos Niebles and Ranjay Krishna


Stanford Vision and Learning Lab

Stanford University Lecture 5 - 1 10-Oct-17


Continuing from last lecture
• Linear Systems
• Convolution
• Correlation



Linear Systems (filters)

• Linear filtering:
– Form a new image whose pixels are a weighted sum of
original pixel values
– Use the same set of weights at each point

• S is a linear system (function) iff S satisfies the superposition property:

$S[\alpha f + \beta g] = \alpha\, S[f] + \beta\, S[g]$


2D impulse function
• 1 at [0,0].
• 0 everywhere else



LSI (linear shift invariant) systems
Impulse response



Why are convolutions flipped?
Let’s first represent f[0,0] as a sum of deltas:

$f[0,0] = f[0,0] \times 1 = f[0,0] \times \delta[0,0] = \sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty} f[k,l]\,\delta[k,l]$

or, equivalently,

$f[0,0] = \sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty} f[k,l]\,\delta[-k,-l]$


Why are convolutions flipped?
Let’s first represent f[1,1] as a sum of deltas:

$f[1,1] = f[1,1] \times 1 = f[1,1] \times \delta[0,0] = \sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty} f[k,l]\,\delta[1-k,\,1-l]$


Why are convolutions flipped?
Now, for the general case, let’s write f[n, m] as a sum of deltas:

$f[n,m] = f[n,m] \times 1 = f[n,m] \times \delta[0,0] = \sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty} f[k,l]\,\delta[n-k,\,m-l]$


Why are convolutions flipped?
• Now, let’s pass this function through a linear shift invariant (LSI) system S:

$S[f[n,m]] = S\!\left[\sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty} f[k,l]\,\delta[n-k,\,m-l]\right]$

Let’s apply the superposition property here:

$= \sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty} f[k,l]\,S[\delta[n-k,\,m-l]]$


Why are convolutions flipped?
Copying the equation from the previous slide:

$S[f[n,m]] = \sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty} f[k,l]\,S[\delta[n-k,\,m-l]]$

Finally, we know what happens when we pass a delta function through an LSI system: we get its impulse response h. Using that here:

$S[f[n,m]] = \sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty} f[k,l]\,h[n-k,\,m-l]$

And this is how we get our flipped convolution function.
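The derivation above can be checked numerically. Below is a minimal 2D convolution sketch (assuming NumPy; the zero padding and "same" output size are choices made here, not from the slides) in which the kernel flip appears explicitly; convolving with the impulse returns the image unchanged, matching the delta identity above:

```python
import numpy as np

def conv2d(f, h):
    """2D convolution: flip the kernel, then slide a weighted sum (zero padding)."""
    kh, kw = h.shape
    ph, pw = kh // 2, kw // 2
    h_flipped = h[::-1, ::-1]          # the flip from the derivation
    fp = np.pad(f, ((ph, ph), (pw, pw)))
    out = np.zeros_like(f, dtype=float)
    for n in range(f.shape[0]):
        for m in range(f.shape[1]):
            out[n, m] = np.sum(fp[n:n+kh, m:m+kw] * h_flipped)
    return out

# Convolving with the 2D impulse reproduces the image: f * delta = f.
img = np.arange(9.0).reshape(3, 3)
delta = np.zeros((3, 3)); delta[1, 1] = 1.0
print(np.allclose(conv2d(img, delta), img))  # True
```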



Slide credit: Ulas Bagci



2D convolution example

Slide credit: Song Ho Ahn



Slide credit: Wolfram Alpha



(Cross) correlation (symbol: ∗∗)
Cross-correlation of two 2D signals f[n,m] and h[n,m]:

$f[n,m] ** h[n,m] = \sum_{k=-\infty}^{\infty}\sum_{l=-\infty}^{\infty} f[k,l]\,h[k-n,\,l-m]$
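The difference from convolution is just the missing kernel flip. A small sketch (assuming NumPy; padding and output size are choices made here) that makes the relationship concrete:

```python
import numpy as np

def cross_correlate2d(f, h):
    """Cross-correlation: a sliding weighted sum WITHOUT flipping the kernel."""
    kh, kw = h.shape
    ph, pw = kh // 2, kw // 2
    fp = np.pad(f, ((ph, ph), (pw, pw)))
    out = np.zeros_like(f, dtype=float)
    for n in range(f.shape[0]):
        for m in range(f.shape[1]):
            out[n, m] = np.sum(fp[n:n+kh, m:m+kw] * h)
    return out

img = np.arange(16.0).reshape(4, 4)
h = np.array([[0., 0., 0.], [1., 0., -1.], [0., 0., 0.]])  # antisymmetric kernel

corr = cross_correlate2d(img, h)
conv = cross_correlate2d(img, h[::-1, ::-1])  # convolution = correlation with the flipped kernel
print(np.allclose(corr, -conv))  # True: flipping this kernel negates it
```

For symmetric kernels (e.g. a Gaussian) the flip changes nothing, which is why the two operations are often used interchangeably for smoothing.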



(Cross) correlation – example

Courtesy of J. Fessler
(Cross) correlation – example

Courtesy of J. Fessler
numpy’s
correlate
(Cross) correlation – example
[Figure: left and right stereo images; normalized cross-correlation score along a scanline]


Convolution vs. (Cross) Correlation

f*h f**h



Cross-correlation application: vision system for TV remote control
- uses template matching

Figure from “Computer Vision for Interactive Computer Graphics,” W. Freeman et al., IEEE Computer Graphics and Applications, 1998. Copyright 1998, IEEE.


Convolution properties
• Commutative property:

• Associative property:

• Distributive property:

The order doesn’t matter!


Convolution properties (continued)

• Shift property:

• Shift-invariance:



Convolution vs. (Cross) Correlation
• A convolution is an integral that expresses the amount of overlap of one function as it is shifted over another function.
– Convolution is a filtering operation.

• Correlation compares the similarity of two sets of data. It computes a measure of similarity of two input signals as one is shifted past the other. The correlation result reaches a maximum at the shift where the two signals match best.
– Correlation is a measure of relatedness of two signals.


What we will learn today
• Edge detection
• Image Gradients
• A simple edge detector
• Sobel edge detector
• Canny edge detector
• Hough Transform
Some background reading:
Forsyth and Ponce, Computer Vision, Chapter 8



(A) Cave painting at Chauvet, France, about 30,000 B.C.;
(B) Aerial photograph of the picture of a monkey as part of the Nazca Lines geoglyphs, Peru, about 700 – 200 B.C.;
(C) Shen Zhou (1427-1509 A.D.): Poet on a mountain top, ink on paper, China;
(D) Line drawing by 7-year-old I. Lleras (2010 A.D.).





We know edges are special from
human (mammalian) vision studies

Hubel & Wiesel, 1960s





Walther, Chai, Caddigan, Beck & Fei-Fei, PNAS, 2011
Edge detection
• Goal: Identify sudden
changes (discontinuities) in
an image
– Intuitively, most semantic and
shape information from the
image can be encoded in the
edges
– More compact than pixels

• Ideal: artist’s line drawing (but the artist is also using object-level knowledge)

Source: D. Lowe


Why do we care about edges?
• Extract information, recognize objects

• Recover geometry and viewpoint

[Figure: vanishing points, vanishing line, and the vertical vanishing point at infinity]

Source: J. Hayes


Origins of edges

surface normal discontinuity

depth discontinuity

surface color discontinuity

illumination discontinuity

Source: D. Hoiem



Closeup of edges

Surface normal discontinuity

Source: D. Hoiem



Closeup of edges

Depth discontinuity

Source: D. Hoiem



Closeup of edges

Surface color discontinuity

Source: D. Hoiem



What we will learn today
• Edge detection
• Image Gradients
• A simple edge detector
• Sobel edge detector
• Canny edge detector
• Hough Transform



Derivatives in 1D



Derivatives in 1D - example



Derivatives in 1D - example

Slide credit: Dr Mubarak


Discrete Derivative in 1D



Types of Discrete derivative in 1D
Backward

Forward

Central



1D discrete derivative filters
• Backward filter: [0 1 -1]

• Forward: [-1 1 0]

• Central: [ 1 0 -1]
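These filters can be tried directly. A sketch assuming NumPy; note that `np.convolve` flips its kernel, so convolving with the central filter [1 0 -1] computes the central difference f[n+1] - f[n-1]:

```python
import numpy as np

f = np.array([0., 0., 0., 1., 1., 1.])  # a 1D step edge

# np.convolve flips the kernel, so [1, 0, -1] yields f[n+1] - f[n-1].
df = np.convolve(f, [1, 0, -1], mode='same')
print(df)  # [ 0.  0.  1.  1.  0. -1.]  (the -1 is a zero-padding boundary artifact)
```

The derivative responds on both sides of the step, which already hints at the localization issue discussed later.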





1D discrete derivative example



Discrete derivative in 2D





2D discrete derivative filters

What does this filter do?



2D discrete derivative filters

What about this filter?



2D discrete derivative - example



2D discrete derivative - example
What happens when we apply
this filter?





2D discrete derivative - example
Now let’s try the other filter!



2D discrete derivative - example
What happens when we apply this filter?



3x3 image gradient filters



What we will learn today
• Edge detection
• Image Gradients
• A simple edge detector
• Sobel edge detector
• Canny edge detector
• Hough Transform



Characterizing edges
• An edge is a place of rapid change in the
image intensity function
[Figure: image, its intensity function along a horizontal scanline, and the first derivative; edges correspond to extrema of the derivative]


Image gradient
• The gradient of an image: $\nabla f = \left[\frac{\partial f}{\partial x},\, \frac{\partial f}{\partial y}\right]$

The gradient vector points in the direction of most rapid increase in intensity.

The gradient direction is given by $\theta = \tan^{-1}\!\left(\frac{\partial f}{\partial y} \Big/ \frac{\partial f}{\partial x}\right)$

• How does this relate to the direction of the edge?

The edge strength is given by the gradient magnitude: $\|\nabla f\| = \sqrt{\left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2}$

Source: Steve Seitz
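The gradient, its magnitude, and its direction can be computed in a few lines (a sketch assuming NumPy; `np.gradient` uses central differences in the interior, one-sided differences at the borders):

```python
import numpy as np

# A small image with a vertical step edge: intensity increases along x.
img = np.zeros((5, 5)); img[:, 3:] = 1.0

gy, gx = np.gradient(img)        # np.gradient returns d/d(axis0), d/d(axis1)
magnitude = np.hypot(gx, gy)     # edge strength
direction = np.arctan2(gy, gx)   # gradient direction in radians

# The gradient points across the edge (here: the +x direction),
# so the direction at the step is 0 and the magnitude is nonzero there.
print(direction[2, 3], magnitude[2, 3])  # 0.0 0.5
```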



Finite differences: example

[Figure: original image, gradient magnitude, and finite differences in the x- and y-directions]

• Which one is the gradient in the x-direction? How about the y-direction?
Intensity profile

[Figure: intensity and gradient profiles along a scanline]

Source: D. Hoiem


Effects of noise
• Consider a single row or column of the image
– Plotting intensity as a function of position gives a signal

Where is the edge? Source: S. Seitz





Effects of noise
• Finite difference filters respond strongly to
noise
– Image noise results in pixels that look very
different from their neighbors
– Generally, the larger the noise the stronger the
response
• What is to be done?
– Smoothing the image should help, by forcing pixels that differ from their neighbors (= noise pixels?) to look more like their neighbors
Source: D. Forsyth





Smoothing with different filters
• Mean smoothing

• Gaussian (smoothing * derivative)

Slide credit: Steve Seitz


Smoothing with different filters

Slide credit: Steve Seitz


Solution: smooth first

[Figure: signal f, smoothed signal f ∗ g, and its derivative d/dx (f ∗ g)]

• To find edges, look for peaks in d/dx (f ∗ g).

Source: S. Seitz


Derivative theorem of convolution
• This theorem gives us a very useful property:

d/dx (f ∗ g) = f ∗ (d/dx g)

• This saves us one operation: convolve f directly with the derivative d/dx g.

Source: S. Seitz
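The theorem can be verified numerically: differentiating the smoothed signal equals convolving with the differentiated kernel (a sketch assuming NumPy; the Gaussian kernel and derivative filter below are illustrative choices, and full-mode convolution is used so the identity holds exactly by associativity):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(50)                     # a noisy 1D signal
g = np.exp(-np.linspace(-3, 3, 21)**2 / 2)      # Gaussian smoothing kernel
g /= g.sum()
d = np.array([1., 0., -1.])                     # discrete derivative filter

# d/dx (f * g)  ==  f * (d/dx g): differentiate the (small) kernel instead
left = np.convolve(np.convolve(f, g), d)
right = np.convolve(f, np.convolve(g, d))
print(np.allclose(left, right))  # True, by associativity of convolution
```

The saving is real: g ∗ d is a tiny precomputed kernel, so the long signal f is convolved only once.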



Derivative of Gaussian filter

2D Gaussian ∗ [1 0 −1] = x-derivative of Gaussian



Derivative of Gaussian filter

x-direction y-direction





Tradeoff between smoothing at different scales

1 pixel 3 pixels 7 pixels

• The smoothed derivative removes noise, but blurs the edge. It also finds edges at different “scales”.

Source: D. Forsyth




Designing an edge detector
• Criteria for an “optimal” edge detector:
– Good detection: the optimal detector must minimize the probability of
false positives (detecting spurious edges caused by noise), as well as that
of false negatives (missing real edges)
– Good localization: the edges detected must be as close as possible to
the true edges
– Single response: the detector must return one point only for each true
edge point; that is, minimize the number of local maxima around the
true edge



What we will learn today
• Edge detection
• Image Gradients
• A simple edge detector
• Sobel Edge detector
• Canny edge detector
• Hough transform



Sobel Operator
• Uses two 3×3 kernels which are convolved with the original image to calculate approximations of the derivatives
• One for horizontal changes, and one for vertical changes



Sobel Operation
• Smoothing + differentiation: each kernel combines Gaussian-like smoothing along one axis with differentiation along the other



Sobel Operation
• Magnitude: $\|\nabla f\| = \sqrt{G_x^2 + G_y^2}$

• Angle or direction of the gradient: $\theta = \tan^{-1}(G_y / G_x)$
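A minimal sketch of the Sobel operation, assuming NumPy and using the standard 3×3 Sobel kernels (the simple correlation loop and zero padding are choices made here; the flip that distinguishes correlation from convolution only changes the sign of Gx/Gy and leaves the magnitude unchanged):

```python
import numpy as np

# The standard 3x3 Sobel kernels: smoothing in one axis, differencing in the other.
Kx = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
Ky = Kx.T

def filter2d(img, k):
    """Correlate img with a 3x3 kernel, zero padding."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i+3, j:j+3] * k)
    return out

img = np.zeros((6, 6)); img[:, 3:] = 1.0   # vertical step edge
gx, gy = filter2d(img, Kx), filter2d(img, Ky)
magnitude = np.hypot(gx, gy)
angle = np.arctan2(gy, gx)
print(magnitude[3, 2], magnitude[3, 3])  # 4.0 4.0: strong response on both edge columns
```

Note the response spans two adjacent columns, illustrating the localization problem mentioned on the next slides.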



Sobel Filter example



Sobel Filter Problems

• Poor localization (triggers responses in multiple adjacent pixels)


• Thresholding value favors certain directions over others
– Can miss oblique edges more than horizontal or vertical edges
– False negatives



What we will learn today
• Edge detection
• Image Gradients
• A simple edge detector
• Sobel Edge detector
• Canny edge detector
• Hough Transform



Canny edge detector
• This is probably the most widely used edge
detector in computer vision
• Theoretical model: step-edges corrupted by
additive Gaussian noise
• Canny has shown that the first derivative of
the Gaussian closely approximates the
operator that optimizes the product of
signal-to-noise ratio and localization

J. Canny, A Computational Approach to Edge Detection, IEEE Trans. Pattern Analysis and Machine Intelligence, 8:679-714, 1986.



Canny edge detector
• Suppress Noise
• Compute gradient magnitude and direction
• Apply Non-Maximum Suppression
– Assures minimal response
• Use hysteresis and connectivity analysis to
detect edges



Example

• original image
Derivative of Gaussian filter

x-direction y-direction



Compute gradients (DoG)

X-Derivative of Gaussian Y-Derivative of Gaussian Gradient Magnitude

Source: J. Hayes



Get orientation at each pixel

Source: J. Hayes





Canny edge detector
• Suppress Noise
• Compute gradient magnitude and direction
• Apply Non-Maximum Suppression
– Assures minimal response



Non-maximum suppression
• An edge occurs where the gradient reaches a maximum
• Suppress non-maxima gradient even if it
passes threshold
• Only eight angle directions possible
– Suppress all pixels in each direction which are not
maxima
– Do this in each marked pixel neighborhood
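The steps above can be sketched as follows (assuming NumPy; the 4-orientation quantization, the >= tie-breaking, and the toy ridge are choices made here for illustration, not the exact CS131 implementation):

```python
import numpy as np

def nms(mag, angle):
    """Keep a pixel only if its gradient magnitude is a maximum along
    the (quantized) gradient direction; suppress it otherwise."""
    out = np.zeros_like(mag)
    ang = np.rad2deg(angle) % 180          # quantize to 0, 45, 90, 135 degrees
    offsets = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            # nearest quantized orientation (circular distance on [0, 180))
            q = min(offsets, key=lambda o: min(abs(ang[i, j] - o), 180 - abs(ang[i, j] - o)))
            di, dj = offsets[q]
            if mag[i, j] >= mag[i + di, j + dj] and mag[i, j] >= mag[i - di, j - dj]:
                out[i, j] = mag[i, j]
    return out

# A 2-pixel-wide ridge (gradient along +x): NMS thins it to one pixel.
mag = np.zeros((5, 5)); mag[:, 2] = 2.0; mag[:, 3] = 1.0
out = nms(mag, np.zeros_like(mag))
print(out[2])  # [0. 0. 2. 0. 0.]: only the column-2 maximum survives
```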



Remove spurious gradients
$\nabla G(x, y)$ is the gradient at pixel (x, y). Keep it only if it is a local maximum along the gradient direction:

$\|\nabla G(x, y)\| > \|\nabla G(x', y')\|$ and $\|\nabla G(x, y)\| > \|\nabla G(x'', y'')\|$





Non-maximum suppression
At q, we have a maximum if the value is larger than those at both p and r. Interpolate to get these values.

Source: D. Forsyth



Non-max Suppression

Before After
Canny edge detector
• Suppress Noise
• Compute gradient magnitude and direction
• Apply Non-Maximum Suppression
– Assures minimal response
• Use hysteresis and connectivity analysis to
detect edges



Hysteresis thresholding
• Avoid streaking near threshold value
• Define two thresholds: Low and High
– If less than Low, not an edge
– If greater than High, strong edge
– If between Low and High, weak edge



Hysteresis thresholding
If the gradient magnitude at a pixel is
• above High: declare it a ‘strong edge pixel’
• below Low: declare it a ‘non-edge pixel’
• between Low and High: consider its neighbors iteratively, and declare it an ‘edge pixel’ if it is connected to a ‘strong edge pixel’ directly or via pixels whose gradient lies between Low and High
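These rules can be sketched directly (assuming NumPy; the naive repeated-sweep propagation below stands in for the connectivity analysis, which real implementations do more efficiently with BFS or connected components, and the toy magnitudes are invented for illustration):

```python
import numpy as np

def hysteresis(mag, low, high):
    """Strong pixels (> high) are edges; weak pixels (in [low, high]) become
    edges only if connected to a strong pixel; everything else is dropped."""
    strong = mag > high
    weak = (mag >= low) & ~strong
    edges = strong.copy()
    changed = True
    while changed:                      # propagate edge labels into weak pixels
        changed = False
        for i in range(mag.shape[0]):
            for j in range(mag.shape[1]):
                if weak[i, j] and not edges[i, j]:
                    i0, i1 = max(i - 1, 0), min(i + 2, mag.shape[0])
                    j0, j1 = max(j - 1, 0), min(j + 2, mag.shape[1])
                    if edges[i0:i1, j0:j1].any():   # 8-connected neighborhood
                        edges[i, j] = True
                        changed = True
    return edges

mag = np.array([[9., 4., 4., 1.],
                [1., 1., 4., 9.],
                [4., 1., 1., 1.]])
e = hysteresis(mag, low=3, high=8)
# The weak chain linking the two strong pixels survives;
# the isolated weak pixel at (2, 0) is dropped.
print(e.astype(int))
```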



Hysteresis thresholding

[Figure: two strong edge pixels joined by weak but connected edge pixels]

Source: S. Seitz



Final Canny Edges



Canny edge detector
1. Filter image with x, y derivatives of Gaussian
2. Find magnitude and orientation of gradient
3. Non-maximum suppression:
– Thin multi-pixel wide “ridges” down to single pixel width
4. Thresholding and linking (hysteresis):
– Define two thresholds: low and high
– Use the high threshold to start edge curves and the low
threshold to continue them



Effect of σ (Gaussian kernel spread/size)

[Figure: original image and Canny results at two different σ values]

The choice of σ depends on the desired behavior:
• large σ detects large-scale edges
• small σ detects fine features
Source: S. Seitz



[Figure: boundary detection comparison: gradients (e.g. Canny), color, texture, combined, and human annotations]



45 years of boundary detection

Source: Arbelaez, Maire, Fowlkes, and Malik. TPAMI 2011 (pdf)



What we will learn today
• Edge detection
• Image Gradients
• A simple edge detector
• Sobel Edge detector
• Canny edge detector
• Hough Transform



Intro to Hough transform
• The Hough transform (HT) can be used to detect
lines.
• It was introduced in 1962 (Hough 1962) and first
used to find lines in images a decade later (Duda
1972).
• The goal is to find the location of lines in images.
• Caveat: Hough transform can detect lines, circles
and other structures ONLY if their parametric
equation is known.
• It can give robust detection under noise and
partial occlusion
Prior to Hough transform
• Assume that we have performed some edge detection,
and a thresholding of the edge magnitude image.
• Thus, we have some pixels that may partially describe
the boundary of some objects.



Detecting lines using Hough transform
• We wish to find sets of pixels that make up
straight lines.
• Consider a point of known coordinates (xi, yi).
– There are many lines passing through the point (xi, yi).
• Straight lines that pass through that point have the form yi = a·xi + b.
– Common to them is that they satisfy the equation for some set of parameters (a, b).



Detecting lines using Hough transform
• This equation can obviously be rewritten as follows:
– b = -a·xi + yi
– We can now consider x and y as parameters, and a and b as variables.
• This is a line in (a, b) space parameterized by x and y.
– So: a single point in (x1, y1)-space gives a line in (a, b) space.
– Another point (x2, y2) will give rise to another line in (a, b) space.





Detecting lines using Hough transform
• Two points (x1, y1) and (x2, y2) define a line in the (x, y) plane.
• These two points give rise to two different lines in (a, b) space.
• In (a, b) space these lines will intersect in a point (a', b').
• All points on the line defined by (x1, y1) and (x2, y2) in (x, y) space will parameterize lines that intersect in (a', b') in (a, b) space.



Algorithm for Hough transform
• Quantize the parameter space (a, b) by dividing it into cells.
• This quantized space is often referred to as the accumulator cells.
• Count the number of times a line intersects a given cell.
– For each pair of points (x1, y1) and (x2, y2) detected as an edge, find the intersection (a', b') in (a, b) space.
– Increase the value of the cell in the range [[amin, amax], [bmin, bmax]] that (a', b') belongs to.
– Cells receiving more than a certain number of counts (also called ‘votes’) are assumed to correspond to lines in (x, y) space.



Output of Hough transform
• Here are the top 20 most voted lines in the
image:



Other Hough transformations
• We can represent lines in polar coordinates instead of y = a·x + b
• Polar coordinate representation:
– x·cos θ + y·sin θ = ρ
• Can you figure out the relationship between (x, y) and (ρ, θ)?
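A minimal voting sketch in this (ρ, θ) parameterization (assuming NumPy; rounding ρ to integer accumulator cells and using 1° θ bins are quantization choices made here for illustration):

```python
import numpy as np

def hough_lines(points, shape, n_theta=180):
    """Vote in (rho, theta) space: each edge point (x, y) traces the curve
    rho = x*cos(theta) + y*sin(theta) through the accumulator."""
    diag = int(np.ceil(np.hypot(*shape)))        # bound on |rho|
    thetas = np.deg2rad(np.arange(n_theta))      # theta in [0, 180) degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for x, y in points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, diag

# Ten collinear points on the vertical line x = 5: they all vote for the
# cell (rho = 5, theta = 0), which therefore reaches the maximum vote count.
points = [(5, y) for y in range(10)]
acc, diag = hough_lines(points, shape=(20, 20))
print(acc[diag + 5, 0], acc.max())  # 10 10
```

A vertical line having θ = 0 and ρ equal to its x-intercept is exactly the property stated on the next slide.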



Other Hough transformations
• Note that lines in (x, y) space are not lines in (ρ, θ) space, unlike in (a, b) space.
• A vertical line will have θ = 0 and ρ equal to the intercept with the x-axis.
• A horizontal line will have θ = 90° and ρ equal to the intercept with the y-axis.



Example video
• https://youtu.be/4zHbI-fFIlI?t=3m35s



Concluding remarks
• Advantages:
– Conceptually simple.
– Easy implementation
– Handles missing and occluded data very gracefully.
– Can be adapted to many types of forms, not just lines
• Disadvantages:
– Computationally complex for objects with many parameters.
– Looks for only one single type of object
– Can be “fooled” by “apparent lines”.
– The length and the position of a line segment cannot be
determined.
– Co-linear line segments cannot be separated.



What we learned today
• Edge detection
• Image Gradients
• A simple edge detector
• Sobel Edge detector
• Canny edge detector
• Hough Transform

