Digital Image Fundamentals: Dr. Akriti Nigam Assistant Professor BIT-Mesra
Chapter 2
7/28/2019 1
Elements of Visual Perception
• The aim of image processing and analysis techniques is to build a system with capabilities similar to those of the human visual system.
• To achieve this, one must understand the formation of the image in the eye, brightness adaptation and discrimination, and perceptual mechanisms, and incorporate such knowledge into the processing algorithms.
Structure of Human Eye
• Spherical in shape with average diameter
approximately 20 mm
• Three membranes enclose the eye
Cornea and sclera as outermost cover
Choroid as middle layer
Retina as innermost layer
• Iris controls the pupil, whose diameter varies from approximately 2 to 8 mm
• Lens
Light Receptors over Retina
• 2 photosensitive cell types in retina
– RODS: sensitive to brightness/luminance
– CONES: sensitive to color/ frequency
• Cellular distribution not uniform
– cones predominate at the fovea
– rods dominate at periphery of vision
Color vision best at the fovea, poor in
peripheral vision
Luminance detection poorer at fovea
• Cone vision is called photopic or bright-light vision.
• Rods are not involved in color vision and are sensitive to low levels of illumination. For example, objects that appear brightly colored in daylight appear as colorless forms in moonlight, because only the rods are stimulated. This phenomenon is known as scotopic or dim-light vision.
Relationship between Intensity and
Brightness
• Intensity and brightness are two different phenomena.
• Intensity of a light source depends on the total
amount of light emitted.
• Thus, intensity is a physical property and can
be measured.
• On the other hand, brightness is a psycho-visual concept and can be described as the sensation produced by light intensity.
• The subjective brightness perceived by human
visual system is a logarithmic function of
intensity incident on the eye.
Intensity Discrimination
Consider a sharp-edged increment of intensity ΔI superimposed on a field of uniform brightness I.
• The ratio ΔI/I, known as the ‘Weber Ratio’, has an approximately constant value of 0.02 over a wide range of brightness I.
• Small value of ΔI/I means “good” brightness discrimination
• Large value of ΔI/I means “poor” brightness discrimination
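As a sketch of the relationship (the 0.02 figure is the approximate Weber ratio quoted above; the function name is mine):

```python
# Weber's law: the just-noticeable increment dI satisfies dI/I ~ 0.02,
# so the smallest detectable change grows with background intensity I.
def just_noticeable_increment(I, weber_ratio=0.02):
    """Approximate smallest intensity step distinguishable on background I."""
    return weber_ratio * I

# The same 2% ratio implies a far larger absolute step on a bright
# background than on a dim one:
dim_step = just_noticeable_increment(10.0)
bright_step = just_noticeable_increment(500.0)
```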
Effect on Human Perception
• It has been shown that perceived brightness is
not a simple function of intensity.
• Example of simultaneous contrast:
All the inner squares have the same intensity,
but they appear progressively darker as the
background becomes lighter.
Contd…
• The inner square appears to the eye to become darker as the background gets lighter.
A more familiar example is a piece of paper that seems white when lying on a desk, but can appear totally black when used to shield the eyes while looking directly at a bright sky.
Other Characteristics of Human Visual System
• Phenomenon of Optical Illusion: one important characteristic of the HVS is that it tends to fill in the interior of a region with the apparent brightness at the edge.
Imaging Sensors
Image Sensing and Acquisition
• Illumination source: a source of electromagnetic energy, or even a computer-generated illumination pattern.
• Scene elements: familiar objects, molecules, buried rock formations, or a human brain.
Image Acquisition using Single Sensor
Image Acquisition using Sensor Strip
Process of CAT Imaging
Image Acquisition using Sensor Arrays
• The first function performed by the imaging system
is to collect the incoming reflected energy and focus
it onto an image plane.
• If the illumination is light, the front end of the
imaging system is a lens, which projects the viewed
scene onto the lens focal plane.
• The sensor array, which is coincident with the focal
plane, produces outputs proportional to the
integral of the light received at each sensor.
What is Image Digitization?
• An image g(x,y) that is detected and recorded by a sensor is primarily a continuous-tone intensity pattern formed on a 2-D plane.
• The image must be converted into a form which
is suitable for computer processing.
• The method of converting an image, which is
continuous in space as well as in its value, into a
discrete numerical form is called image
digitization.
Steps involved in Image Digitization
Figure: Generating a digital image. (a) Continuous image. (b) A scan line from A to B in the continuous image, used to illustrate the concepts of sampling and quantization; samples are shown as white squares, and the gray-level scale is divided into 8 discrete levels. (c) Sampling and quantization. (d) Digital scan line.
Controlling Image Sampling
• The method of sampling is determined by the
sensor arrangement used to generate the
image.
• In case of Image generation by single sensor
element, ‘sampling’ is accomplished by
selecting the number of mechanical
increments at which we activate the sensor to
collect data.
• Case of Sensor Strip: the number of sensors
in the strip establishes the sampling
limitations in one image direction.
• Mechanical motion in the other direction is controlled accordingly.
• Case of Sensor Array: number of sensors in
the array establishes the limit of sampling
in both directions.
Image Quantization
• The continuous
gray levels are
quantized simply
by assigning one
of the eight
discrete gray
levels to each
sample.
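A minimal sketch of the two steps together (the 8-level scale and the 0–255 input range match the figure; the function name is my own):

```python
def sample_and_quantize(scanline, step, levels=8, vmax=255):
    """Sample a continuous-tone scan line every `step` positions, then
    assign each sample one of `levels` equally spaced gray levels."""
    samples = scanline[::step]        # spatial sampling
    width = (vmax + 1) / levels       # intensity span of one level
    return [min(int(s / width), levels - 1) for s in samples]

# A ramp from black (0) to white (255), sampled every 32 pixels and
# quantized to 8 discrete levels:
sample_and_quantize(list(range(256)), 32)
```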
Spatial Resolution
• Basically, the spatial resolution is the smallest
discernible detail in an image.
• ‘Sampling’ is the principal factor determining
the spatial resolution of an image.
• Explanation: Consider a chart with vertical
lines of width W, with the space between the
lines also having width W.
• A line pair consists of one such line and its
adjacent space. Thus the width of a line pair is
2W.
Thus there are 1/(2W) line pairs per unit distance.
• We may define ‘spatial resolution’ as the smallest number of discernible line pairs per unit distance (e.g. 100 line pairs per mm).
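This relationship is easy to check numerically (a sketch; the function name is mine):

```python
def line_pairs_per_unit(W):
    """One line pair (a line plus its adjacent space) spans 2W,
    so there are 1/(2W) line pairs per unit distance."""
    return 1.0 / (2.0 * W)

# Lines 0.005 mm wide give the 100 line pairs per mm quoted above:
line_pairs_per_unit(0.005)
```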
Image Size
The greater the number of pixels in an image, the denser the picture
information and therefore the higher the resolution. Higher resolution provides
more detail within your image and allows for larger printouts with smooth,
continuous tone and color accuracy.
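The storage cost behind these numbers follows directly: an N*N image with k bits per pixel needs b = N*N*k bits. A sketch:

```python
def storage_bits(N, k):
    """Bits required for an N x N image quantized to k bits per pixel."""
    return N * N * k

def storage_bytes(N, k):
    """Same quantity expressed in bytes (8 bits per byte)."""
    return storage_bits(N, k) // 8

# A 1024 x 1024, 8-bit image:
storage_bytes(1024, 8)  # 1,048,576 bytes (1 MB)
```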
Number of storage bits for various values of N and k
Minimum megapixels for quality prints
Effects of varying the number of
samples on image quality
Fig: A 1024*1024, 8-bit image subsampled down to size 32*32
pixels. The number of allowable gray levels was kept at 256.
Effects of varying number of gray levels on image quality: concept of false contouring. (a) 452*374, 256-level image. (b)–(d) Image displayed in 128, 64, and 32 gray levels. (e)–(h) Image displayed in 16, 8, 4, and 2 gray levels, while keeping the spatial resolution constant.
Operations on Digital Images:
Zooming and Shrinking
• Creation of new pixel locations
– Assume an imaginary grid of size 750*750 pixels lying over the original image.
• Assigning gray-value to new locations
– Nearest neighbor interpolation
– Bilinear interpolation
– Cubic interpolation
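Nearest neighbor interpolation, the simplest of the three, can be sketched as follows (the image is a plain list of rows; the function name is mine):

```python
def zoom_nearest(img, new_h, new_w):
    """Zoom an image (list of rows of gray levels) to new_h x new_w by
    copying, for every new pixel location, the gray level of the
    closest pixel in the original image."""
    old_h, old_w = len(img), len(img[0])
    return [[img[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)]
            for y in range(new_h)]

# Zooming a 2*2 image to 4*4 replicates each pixel into a 2*2 block:
zoom_nearest([[1, 2], [3, 4]], 4, 4)
```

The blocky replication visible here is exactly the checkerboard artifact nearest-neighbor zooming produces at large zoom factors.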
• Problem statement:
• We are given the values of a function f at a
few locations, e.g., f(1), f(2), f(3), …
• Want to find the rest of the values
– What is f(1.5)?
Interpolation
• Example:
f(1) = 1, f(2) = 10, f(3) = 5 , f(4) = 16, f(5) = 20
Interpolation
• How can we find f(1.5)?
• One approach: take the average of f(1) and f(2)
f(1.5) = 5.5
Linear interpolation (lerp)
• Fit a line between each pair of data points
Linear interpolation
• To compute f(x), find the two points x_left and x_right that x lies between, then evaluate the line through them:
f(x) = f(x_left) + (x - x_left)/(x_right - x_left) * (f(x_right) - f(x_left))
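The same rule in code (a sketch; the names are mine):

```python
def lerp(x, x_left, x_right, f_left, f_right):
    """Evaluate at x the line through (x_left, f_left) and (x_right, f_right)."""
    t = (x - x_left) / (x_right - x_left)   # fractional position of x
    return f_left + t * (f_right - f_left)

# Reproduces the earlier estimate f(1.5) = 5.5 from f(1) = 1, f(2) = 10:
lerp(1.5, 1, 2, 1, 10)
```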
Bilinear interpolation
• What about in 2D?
– Interpolate in x, then in y
• Example:
– We know the red values
– Linear interpolation in x between the red values gives us the blue values
– Linear interpolation in y between the blue values gives us the answer
http://en.wikipedia.org/wiki/Bilinear_interpolation
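A sketch of the two-step procedure on a unit square (the corner layout is my own convention):

```python
def bilerp(x, y, f00, f10, f01, f11):
    """Bilinear interpolation: f00..f11 are the known values at corners
    (0,0), (1,0), (0,1), (1,1). Interpolate in x along both edges,
    then interpolate the two results in y."""
    fx0 = f00 * (1 - x) + f10 * x   # along the y = 0 edge
    fx1 = f01 * (1 - x) + f11 * x   # along the y = 1 edge
    return fx0 * (1 - y) + fx1 * y

# The center of the square gets the average of the four corners:
bilerp(0.5, 0.5, 0, 10, 10, 20)
```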
Bilinear interpolation
http://en.wikipedia.org/wiki/Bilinear_interpolation
Nearest neighbor interpolation
Bilinear interpolation
Beyond linear interpolation
• Fits a more complicated model to the pixels in
a neighborhood
• E.g., a cubic function
http://en.wikipedia.org/wiki/Bicubic_interpolation
Polynomial interpolation
• Given n points to fit, we can find a polynomial
p(x) of degree n – 1 that passes through every
point exactly
p(x) = -2.208x^4 + 27.08x^3 - 114.30x^2 + 195.42x - 104
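The degree-4 polynomial above fits the five samples f(1)…f(5) from the earlier example; its printed coefficients are rounded. The same interpolant can be evaluated directly in the Lagrange form without ever computing coefficients (a sketch):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate at x the unique degree n-1 polynomial passing through
    the n points (xs[i], ys[i]), using the Lagrange form."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = float(yi)
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)   # Lagrange basis factor
        total += term
    return total

# The interpolating polynomial reproduces every sample exactly:
xs, ys = [1, 2, 3, 4, 5], [1, 10, 5, 16, 20]
lagrange_eval(xs, ys, 3)  # 5.0
```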
Top row: images zoomed from 128*128, 64*64, and 32*32 pixels to 1024*1024 pixels,
using nearest neighbor gray-level interpolation. Bottom row: using Bilinear interpolation.
Basic Relationship between pixels
• Neighbors of a pixel
• Adjacency, Connectivity, Regions and
Boundaries
• Distance measures
Neighbors of a pixel
• 4-neighbors - N4(p)
(x+1, y), (x-1, y), (x, y+1), (x, y-1)
Neighbors of a pixel
• Diagonal-neighbors - ND(p)
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
• 8-neighbors – N8(p)
N8(p) = N4(p) ∪ ND(p)
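The three neighborhoods expressed as code (a sketch; set-valued so the union is direct):

```python
def n4(x, y):
    """4-neighbors of p = (x, y): horizontal and vertical neighbors."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    """Diagonal neighbors of p = (x, y)."""
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    """8-neighbors: N8(p) = N4(p) union ND(p)."""
    return n4(x, y) | nd(x, y)
```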
Adjacency of Pixels
• Let V be the set of gray-level values used to
define adjacency.
• 4-adjacency:
– Two pixels p and q with values from V are 4-
adjacent if q is in the set N4(p).
• 8-adjacency:
– Two pixels p and q with values from V are 8-
adjacent if q is in the set N8(p).
Adjacency of Pixels
• m-adjacency:
– Two pixels p and q with values from V are m-
adjacent if
i. q is in N4(p) or
ii. q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels
whose values are from V.
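A sketch of the m-adjacency test (the image is represented as a dict from (x, y) to gray level; names are mine):

```python
def m_adjacent(p, q, img, V):
    """True if pixels p and q (both with values in V) are m-adjacent:
    q is in N4(p), or q is in ND(p) and the common 4-neighborhood
    N4(p) & N4(q) contains no pixel whose value is in V."""
    x, y = p
    N4p = {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}
    NDp = {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}
    if img.get(p) not in V or img.get(q) not in V:
        return False
    if q in N4p:
        return True
    if q in NDp:
        qx, qy = q
        N4q = {(qx + 1, qy), (qx - 1, qy), (qx, qy + 1), (qx, qy - 1)}
        return all(img.get(r) not in V for r in N4p & N4q)
    return False

# m-adjacency removes the ambiguity of 8-adjacency: a diagonal link is
# kept only when no 4-connected path already joins the two pixels.
img = {(0, 0): 1, (1, 0): 0, (0, 1): 0, (1, 1): 1}
m_adjacent((0, 0), (1, 1), img, {1})  # diagonal link allowed here
```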
Connectivity
• Two pixels p and q with values from V are connected if there is a path between them consisting of pixels with values from V that are pairwise adjacent.
Region of an Image
• Let R be a subset of pixels in an image.
• We call R a region of the image if R is a
connected set.
Boundary of a Region
• The boundary (also called border or contour) of a region R is the set of pixels in the region that have one or more neighbors that are not in R.
• The boundary of a finite region forms a closed
path.
Distance Measures
• For pixels p, q and z with coordinates (x,y), (s,t)
and (u,v) respectively, D is a distance function or
metric if
– D(p, q) ≥ 0 (D(p, q) =0 iff p=q)
– D(p, q) = D(q, p) and
– D(p, z) ≤ D(p, q) + D(q, z)
• Different Measures
– Euclidean distance
– City Block distance
– Chessboard distance
– M-adjacency Dm distance
Euclidean distance (De)
De(p,q) = [(x-s)^2 + (y-t)^2]^(1/2)
City Block distance (D4)
D4(p,q) = |x-s| + |y-t|
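Sketches of the three named metrics (the D4 and D8 formulas are the standard city-block and chessboard distances; function names are mine):

```python
import math

def d_e(p, q):
    """Euclidean distance De(p,q) = [(x-s)^2 + (y-t)^2]^(1/2)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):
    """City block distance D4(p,q) = |x-s| + |y-t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):
    """Chessboard distance D8(p,q) = max(|x-s|, |y-t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

# For the same pair of pixels the three metrics generally differ:
p, q = (0, 0), (3, 4)
(d_e(p, q), d_4(p, q), d_8(p, q))  # (5.0, 7, 4)
```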
Arithmetic/Logic Operations
Neighborhood-oriented (or mask)
operations
• The idea behind mask operations is to let the value assigned to a pixel be a function of its own gray level and the gray levels of its neighbors.
• Useful in applications such as:
– Noise reduction
– Region thinning
– Feature detection
Contd…
Sub-area of an image:
a b c
d e f
g h i
A 3*3 mask of weights:
w1 w2 w3
w4 w5 w6
w7 w8 w9
The mask response at the center pixel e is the weighted sum w1*a + w2*b + w3*c + w4*d + w5*e + w6*f + w7*g + w8*h + w9*i.
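The weighted-sum response as code (a sketch; a 3*3 averaging mask serves as the example):

```python
def mask_response(neighborhood, weights):
    """Weighted sum R = w1*a + w2*b + ... + w9*i of a 3x3 sub-area,
    with both the sub-area and the mask given row by row as flat
    9-element lists."""
    return sum(w * z for w, z in zip(weights, neighborhood))

# A 3*3 averaging mask (all weights 1/9) pulls an isolated bright pixel
# toward its surroundings -- a simple noise-reduction mask:
box = [1 / 9] * 9
mask_response([10, 10, 10, 10, 100, 10, 10, 10, 10], box)
```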