Digital Image Processing Fundamental (Chapter-02)
No. Question Year
1 Define the following terms:
(i) Pixel
a. Neighbor of Pixel
b. 4- neighbor
c. Diagonal Neighbor
d. 8 Neighbor
(ii) Adjacency
a. 4- adjacency
b. 8 – adjacency
c. M-adjacency
(iii) Path
(iv) Closed path
(v) Region
(vi) Boundary
(vii) Connectivity
(viii) Binary Image
(ix) Distance Function
(x) Euclidean Distance
(xi) City-block Distance / D4 distance
(xii) Chessboard Distance / D8 distance
(xiii) Mask
(xiv) Weber ratio
2 Define image sampling and quantization. 2020,2018,2015,2011
7 What are the differences between photopic and scotopic vision? 2019,2018,2014,2011
9 You might have noticed that emergency vehicles such as ambulances 2013
are often labeled on the front hood with reversed lettering (e.g.
ECNALUBMA). Explain why this is so.
10 An image segment is shown. Let V be the set of gray-level values 2020
used to define adjacency; compute the D4, D8, and Dm distances
between p and q for (i) V = {2,3} (ii) V = {2,6}.
P(0,0)
2 3 2 6 1
6 2 3 6 2
5 3 2 3 5
2 4 3 5 2
4 5 2 3 6
q(4,4)
11 Consider the two image subsets S1 and S2 shown below and
determine whether they are:
S1:          S2:
0 0 0 0 0    0 0 1 1 0
1 0 0 1 0    0 1 0 0 1
1 0 0 1 0    1 1 0 0 0
0 0 1 1 1    0 0 0 0 0
0 0 1 1 1    0 0 1 1 1
a) 4-adjacent
b) 8-adjacent and
c) m-adjacent
12 Calculate (i) Euclidean, (ii) city-block, and (iii) chessboard 2019
distance between spatial locations P(5,5) and Q(20,20).
13 Show the m-adjacent path from the top-right to the bottom-left 2019
pixel of the following image using the set of similar pixels.
V = {125, 126, 127, 128}
3 1 2 1(q)
2 2 0 2
1 2 1 1
1(p) 0 1 1
(i) Pixel
A digital image f(x, y) is composed of a finite number of elements arranged in M rows and
N columns; each element is called a pixel (also picture element, image element, or pel).
With the origin at the top-left corner of the image, x runs down the rows from 0 to M-1 and
y runs across the columns from 0 to N-1.
[Figure: image coordinate convention with the origin at the top-left and one pixel highlighted]
a) 4-neighbor:
         (x,y+1)
(x-1,y)  P(x,y)  (x+1,y)
         (x,y-1)
A pixel p at coordinates (x,y) has four horizontal and vertical neighbors whose
coordinates are given by (x+1,y), (x-1,y), (x,y+1), (x,y-1). This set of pixels is
denoted N4(p). Each neighbor is a unit distance from (x,y), and some of the neighbors
of p lie outside the digital image if (x,y) is on the border of the image.
b) Diagonal Neighbor:
The four diagonal neighbors of p have coordinates (x+1,y+1), (x+1,y-1), (x-1,y+1),
(x-1,y-1) and are denoted by ND(p).
(x-1,y+1)          (x+1,y+1)
          P(x,y)
(x-1,y-1)          (x+1,y-1)
c) 8-Neighbor:
The 4-neighbors and the diagonal neighbors of p together are called the 8-neighbors
of p, denoted N8(p) = N4(p) ∪ ND(p). As before, some of these neighbors fall outside
the image if (x,y) is on the border.
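The three neighborhood sets above can be sketched in a few lines of Python (an illustrative sketch, not part of the original notes; the function names n4, nd, n8 are mine). Neighbors that would fall outside the M × N image border are simply dropped, as the text describes:

```python
# Sketch: N4(p), ND(p), and N8(p) for a pixel p = (x, y) in an M x N image.
def n4(x, y, M, N):
    """4-neighbors: the horizontal and vertical neighbors of (x, y)."""
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(i, j) for i, j in cand if 0 <= i < M and 0 <= j < N]

def nd(x, y, M, N):
    """Diagonal neighbors of (x, y)."""
    cand = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    return [(i, j) for i, j in cand if 0 <= i < M and 0 <= j < N]

def n8(x, y, M, N):
    """8-neighbors: the union of N4 and ND."""
    return n4(x, y, M, N) + nd(x, y, M, N)

print(len(n8(2, 2, 5, 5)))  # interior pixel: 8 neighbors
print(len(n8(0, 0, 5, 5)))  # corner pixel: only 3 neighbors
```

Note how a corner pixel keeps only 3 of its 8 nominal neighbors, matching the border remark in the definition.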
(iii) Adjacency:
Two pixels are adjacent if they are neighbors and their intensity values satisfy some
specified criterion of similarity, i.e., their values belong to a set V of allowed
gray levels.
e.g. V = {1}
V = {0, 2}
Binary image: values are {0, 1}
In a binary image, two pixels are adjacent if they are neighbors and both have
intensity values from V. In a gray-scale image the idea is the same, but V typically
contains more of the gray-level values in the range 0 to 255.
There are three types of adjacency:
1) 4-adjacency
2) 8-adjacency
3) m-adjacency
a) 4- adjacency:
Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
A B C D
E Q P F
G H I J
K L M N
N4(P) = {C, I, F, Q}
b) 8 – adjacency:
Two pixels p and q with values from V are 8-adjacent if q is in the set N8 (P).
A B C D
E Q P F
G H I J
K L M N
Let V = {255, 200, 250, 125, 235, 240, 130, 140, 122, 210, 205}
N8(P) = {B, C, D, Q, F, H, I, J}
c) m-adjacency:
Two pixels p and q with values from V are m-adjacent if
i) q is in N4(p), or
ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
Mixed adjacency is a modification of 8-adjacency, introduced to eliminate the
ambiguity of multiple paths that 8-adjacency can produce.
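The two-part rule can be turned directly into a predicate. The sketch below (mine, not from the notes) tests m-adjacency on the 5 × 5 image segment used later in question 10, with V = {2, 6} as an example:

```python
# Sketch: m-adjacency test on the question-10 image segment.
IMG = [[2, 3, 2, 6, 1],
       [6, 2, 3, 6, 2],
       [5, 3, 2, 3, 5],
       [2, 4, 3, 5, 2],
       [4, 5, 2, 3, 6]]

def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def in_v(p, V):
    """True if p is inside the image and its value belongs to V."""
    x, y = p
    return 0 <= x < len(IMG) and 0 <= y < len(IMG[0]) and IMG[x][y] in V

def is_m_adjacent(p, q, V):
    if not (in_v(p, V) and in_v(q, V)):
        return False
    if q in n4(p):          # rule (i)
        return True
    if q in nd(p):          # rule (ii): shared 4-neighborhood must avoid V
        return not any(in_v(r, V) for r in n4(p) & n4(q))
    return False

print(is_m_adjacent((1, 1), (2, 2), {2, 6}))  # True: shared 4-neighbors hold 3s
print(is_m_adjacent((0, 0), (1, 1), {2, 6}))  # False: (1,0) holds a 6 from V
```

The second call shows exactly the ambiguity m-adjacency removes: the diagonal step is rejected because a 4-path through (1,0) already exists.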
(iv) Path
A path from pixel p with coordinates (x,y) to pixel q with coordinates (s,t) is a
sequence of distinct pixels with coordinates (x0,y0), (x1,y1), ..., (xn,yn), where
(x0,y0) = (x,y), (xn,yn) = (s,t), and pixels (xi,yi) and (xi-1,yi-1) are adjacent
for 1 ≤ i ≤ n. Here n is the length of the path.
(v) Closed path
If (x0,y0) = (xn,yn), i.e., the first and last pixels of the path are the same, the
path is a closed path.
(vi) Region
Let R be a subset of pixels in an image. R is called a region of the image if R is a
connected set, i.e., every pixel in R is connected to every other pixel in R through
neighboring pixels with similar gray levels.
(vii) Boundary
The boundary (border or contour) of a region R is the set of pixels in R that have
one or more neighbors that are not in R, i.e., the pixels where the intensity of the
region meets the different intensity of the surrounding image.
(viii) Connectivity
Two pixels p and q are said to be connected if there exists a path between them
consisting entirely of pixels that are pairwise adjacent under the chosen adjacency
(4-, 8-, or m-adjacency).
(ix) Distance Function
For pixels p, q, and z, with coordinates (x,y), (s,t), and (v,w), respectively, D is a
distance function or metric if
a) D(p,q) >= 0, with D(p,q) = 0 iff p = q,
b) D(p,q) = D(q,p), and
c) D(p,z) <= D(p,q) + D(q,z).
(x) Euclidean Distance
The Euclidean distance between p and q is
De(p,q) = [(x-s)^2 + (y-t)^2]^(1/2)
In this approach, the pixels having a distance less than or equal to some value r
from (x,y) are the points contained in a disk of radius r centered at (x,y).
(xi) City-block Distance / D4 distance
The D4 (city-block) distance between p and q is
D4(p,q) = |x-s| + |y-t|
In this case, the pixels having a D4 distance less than or equal to 2 from (x,y)
(the center point) form the following contours of constant distance:
        2
    2   1   2
2   1   0   1   2
    2   1   2
        2
(xii) Chessboard Distance / D8 distance
The D8 (chessboard) distance between p and q is
D8(p,q) = max(|x-s|, |y-t|)
In this case, the pixels with D8 distance from (x,y) less than or equal to some
value r form a square centered at (x,y). For example, the pixels with D8 distance
less than or equal to 2 from (x,y) (the center point) form the following contours
of constant distance:
2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2
The pixels with D8 = 1 are the 8-neighbors of (x,y)
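The three metrics above fit in a few lines of Python (an illustrative sketch, not part of the notes; the names d_e, d4, d8 are mine), evaluated here for the p(0,0) and q(4,4) pixels used in question 10:

```python
import math

def d_e(p, q):
    """Euclidean distance De(p,q) = sqrt((x-s)^2 + (y-t)^2)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    """City-block (D4) distance: |x-s| + |y-t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard (D8) distance: max(|x-s|, |y-t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (4, 4)
print(d4(p, q), d8(p, q), round(d_e(p, q), 2))  # 8 4 5.66
```

Note that D4, D8, and De depend only on coordinates; the set V matters only for Dm, which is defined along an m-path.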
(xiv) Mask
A mask (also called a filter, kernel, template, or window) is a small array of
coefficients, typically 3 × 3, that is moved across an image so that a new value for
each pixel is computed from the pixels lying under the mask. For example, the
following mask responds to vertical intensity transitions (edges):
-1 0 1
-1 0 1
-1 0 1
Weber ratio:
The quantity ΔIc/I, where ΔIc is the increment of illumination discriminable 50% of
the time against a background illumination I, is called the Weber ratio. A small
value of ΔIc/I means that a small percentage change in intensity is discriminable;
this represents "good" brightness discrimination.
2. Define image sampling and quantization. (2020,2018,2015,2011)
Answer:
Sampling
Image sampling is the process of converting a continuous image into a discrete image by selecting a grid
of pixels at regular intervals. This determines the spatial resolution of the digital image. Digitizing the co-
ordinate value is called sampling.
Quantization
Image quantization is the process of mapping the sampled image's pixel values to a limited number of
discrete intensity levels. This reduces the number of distinct colors or gray levels in the image, determining
its intensity (gray-level) resolution. Digitizing the amplitude values is called quantization.
Sampling
An analogue image is continuous not just in its coordinates (x axis) but also in its amplitude (y axis); the
part of digitization that deals with the coordinates is known as sampling. Sampling is performed on the
independent variable: in the case of the equation y = sin(x), it is done on the x variable.
A continuous signal typically contains random variations caused by noise. In sampling we take discrete
samples of this signal; the more samples we take, the better the quality of the resulting image and the more
the effect of noise is reduced, and vice versa. However, sampling on the x axis alone does not convert the
signal to digital form; the y axis must also be digitized, which is known as quantization.
Sampling has a direct relationship with image pixels. The total number of pixels in an image is
Pixels = total number of rows × total number of columns.
For example, a total of 36 pixels means a square image of 6 × 6. As more samples are taken, more pixels
result: taking 36 samples of the continuous signal along the x axis corresponds to 36 pixels of the image.
The number of samples is also directly related to the number of sensors on the CCD array.
Quantization
Quantization is the opposite of sampling in the sense that it is done on the y axis, while sampling is done
on the x axis. Quantization is the process of transforming a real-valued sampled image into one taking
only a finite number of distinct values. Under quantization, the amplitude values of the image are
digitized. In simple words, when you quantize an image, you divide the signal into quanta (partitions).
Now let's see how quantization is done. Levels are assigned to the values generated by the sampling
process. Although the samples have been taken, they still span a continuous range of gray-level values
vertically. Under quantization, these vertically ranging values are mapped to a fixed number of levels or
partitions, for example 5 levels ranging from 0 (black) to 4 (white). The number of levels can vary
according to the type of image required.
Quantization is related to gray-level resolution. An image quantized to 5 different levels of gray would
contain only 5 distinct tones: more or less a black-and-white image with a few shades of gray.
To improve the quality of the image, we can increase the number of levels assigned to the sampled image.
If we increase the level count to 256, we have an ordinary gray-scale image. Whatever number of levels is
assigned is called the gray-level resolution. Most digital image-processing devices quantize into k equal
intervals; if b bits per pixel are used, the number of quantization levels is k = 2^b.
The number of quantization levels should be high enough for human perception of fine shading details in
the image. The occurrence of false contours is the main problem in an image that has been quantized with
insufficient brightness levels.
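The level-assignment step described above can be sketched as a uniform quantizer (illustrative code; the sample values and the helper name quantize are mine, not from the notes). Samples in [0, 1) are mapped to L = 2^b equal intervals:

```python
# Sketch: uniform quantization of sampled amplitudes in [0, 1) into 2**b levels.
def quantize(samples, b):
    L = 2 ** b                                 # number of gray levels
    return [min(int(s * L), L - 1) for s in samples]

samples = [0.02, 0.30, 0.49, 0.51, 0.98]       # continuous amplitudes after sampling
print(quantize(samples, 2))  # 4 levels: [0, 1, 1, 2, 3]
print(quantize(samples, 1))  # 2 levels: [0, 0, 0, 1, 1]
```

Dropping from b = 2 to b = 1 merges neighboring amplitudes into the same level, which is exactly how too few levels produce the false contours mentioned above.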
Answer:
Sampling | Quantization
Sampling is done prior to the quantization process. | Quantization is done after the sampling process.
It determines the spatial resolution of the digitized image. | It determines the number of gray levels in the digitized image.
It reduces the continuous curve to a series of tent poles over time. | It reduces the continuous curve to a series of stair steps.
A single amplitude value is selected from the different values of the time interval to represent it. | Values representing the time intervals are rounded off to create a defined set of possible amplitude values.
4. Explain the process of image formation in the eye.
Or, Describe the principle of image formation in human eye. (2018,2016,2015,2013, 2011,2010)
Answer:
❑ Focusing in the eye is achieved by varying the shape of the flexible lens; the shape of the lens is
controlled by tension in the fibers of the ciliary body.
❑ To focus on distant objects, the controlling muscles cause the lens to be relatively flattened.
❑ Similarly, these muscles allow the lens to become thicker in order to focus on objects near the eye.
❑ The distance between the center of the lens and the retina (called focal length) varies from approximately
17mm to about 14mm, as the refractive power of the lens increases from its minimum to its maximum.
❑ When the eye focuses on an object farther away than about 3m, the lens exhibits its lowest refractive
power.
When the eye focuses on a nearby object, the lens is most strongly refractive.
This information makes it easy to calculate the size of the retinal image of any object. For example,
suppose a person looks at a tree 15 m high at a distance of 100 m. If h is the height in mm of that object in
the retinal image, the geometry of similar triangles yields
15/100 = h/17
h = 2.55 mm
The retinal image is focused primarily on the region of the fovea, where the light receptors transform the
radiant energy into electrical impulses that are ultimately decoded by the brain.
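The similar-triangles arithmetic in the worked example above (object 15 m tall, viewed from 100 m, lens-to-retina distance 17 mm) can be checked in two lines; this is just a numeric restatement of 15/100 = h/17:

```python
# Retinal image height from similar triangles: h / 17 mm = 15 m / 100 m.
object_height_m, distance_m, retina_mm = 15.0, 100.0, 17.0
h = object_height_m / distance_m * retina_mm
print(round(h, 2))  # 2.55 (mm)
```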
Answer:
[Figure: coordinate convention for a digital image, with the origin at the top-left corner, y running across
the columns from 0 to N-1 and x running down the rows from 0 to M-1]
Assume that an image f(x, y) is sampled so that the resulting digital image has M rows and N columns.
The values of the coordinates (x, y) now become discrete quantities. We use integer values for these
discrete coordinates.
Thus, the values of the coordinates at the origin are (x, y) = (0, 0). The next coordinate values along the
first row of the image are represented as (x, y) = (0, 1).
The complete M × N digital image can be written in the following compact matrix form:

         | f(0,0)     f(0,1)    ...  f(0,N-1)   |
f(x,y) = | f(1,0)     f(1,1)    ...  f(1,N-1)   |   ...... (1)
         | ...        ...       ...  ...        |
         | f(M-1,0)   f(M-1,1)  ...  f(M-1,N-1) |
The right side of this eq (1) is by definition a digital image. Each element of this matrix array is called an
image element, picture element, pixel, or pel.
❑ A more traditional matrix notation to denote a digital image and its elements:

    | a(0,0)     a(0,1)    ...  a(0,N-1)   |
A = | a(1,0)     a(1,1)    ...  a(1,N-1)   |   ...... (2)
    | ...        ...       ...  ...        |
    | a(M-1,0)   a(M-1,1)  ...  a(M-1,N-1) |

Clearly, a(i,j) = f(x=i, y=j) = f(i,j), so eq. (1) and eq. (2) are identical matrices.
Due to processing, storage, and sampling hardware considerations, the number of gray levels is typically
an integer power of 2:
L = 2^k .......... (3)
The discrete levels are equally spaced over the interval [0, L-1].
To store a digital image, the number of bits required is
b = M × N × k .......... (4)
When M = N,
b = N^2 × k
An image with 2^k gray levels is referred to as a 'k-bit image'.
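Equations (3) and (4) are easy to evaluate directly; the sketch below (helper names are mine) checks them for a common case, a 1024 × 1024 8-bit image:

```python
# Gray levels L = 2^k (eq. 3) and storage b = M*N*k bits (eq. 4).
def gray_levels(k):
    return 2 ** k

def storage_bits(M, N, k):
    return M * N * k

print(gray_levels(8))                     # 256 gray levels
print(storage_bits(1024, 1024, 8))        # 8388608 bits
print(storage_bits(1024, 1024, 8) // 8)   # 1048576 bytes (1 MiB)
```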
Aliasing:
Aliasing is an effect that causes different signals to become indistinguishable from one another when
sampled. It appears as distortion of the output compared with the original signal whenever the sampling
rate is too low for the signal's frequency content: jagged edges or moiré patterns in images, judder at low
frame rates in video, or false low-frequency tones in audio. Anti-aliasing (low-pass) filters applied before
sampling can be used to reduce this problem.
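A minimal numeric sketch of aliasing (the frequencies are illustrative choices, not from the notes): a 9 Hz sine sampled at 10 Hz, i.e. below its Nyquist rate of 18 Hz, produces exactly the same sample values as a -1 Hz sine, so the two signals cannot be told apart after sampling:

```python
import math

# Sample a 9 Hz sine at fs = 10 Hz and compare with its -1 Hz alias.
fs = 10.0
samples_9hz = [math.sin(2 * math.pi * 9 * n / fs) for n in range(10)]
samples_alias = [math.sin(2 * math.pi * -1 * n / fs) for n in range(10)]
print(all(abs(a - b) < 1e-9 for a, b in zip(samples_9hz, samples_alias)))  # True
```

The alias frequency follows from folding: 9 Hz - 10 Hz = -1 Hz, which is why an anti-aliasing filter must remove such components before sampling.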
7. What are the differences between photopic and scotopic vision? (2019,2018,2014,2011)
Answer:
Feature | Photopic Vision | Scotopic Vision
Light conditions | Bright light (daylight or well-lit environments) | Low light (nighttime or dim environments)
Primary photoreceptors | Cones | Rods
Sensitivity to light | Less sensitive (requires more light) | Highly sensitive (requires less light)
Color perception | Full color vision (red, green, blue) | No color perception (black and white)
Number of receptors | Fewer cones (approximately 6-7 million) | More rods (approximately 120 million)
9. You might have noticed that emergency vehicles such as ambulances are often labeled on the front
hood with reversed lettering (e.g. ECNALUBMA). Explain why this is so. (2013)
Or, Why is AMBULANCE written as ECNALUBMA on emergency vehicles?
Answer:
The word "AMBULANCE" is written as "ECNALUBMA" on the front of emergency vehicles to ensure it
is easily readable by drivers in their rearview mirrors. Here's a detailed explanation:
1. Reflection in Mirrors:
- When text is viewed through a mirror, it appears reversed. This means that if "AMBULANCE" were
written normally on the front of the vehicle, it would appear as "ECNALUBMA" in the rearview mirror,
making it difficult to read quickly.
2. Reversed Writing:
- By writing "AMBULANCE" as "ECNALUBMA," the text appears correctly oriented when viewed in
a rearview mirror. This ensures that drivers can immediately recognize the word "AMBULANCE" and
understand that an emergency vehicle is approaching, prompting them to move aside.
This practice enhances safety and response time by ensuring that drivers can quickly and easily recognize
an ambulance approaching from behind, allowing them to yield the right of way more efficiently.
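For a single line of lettering, the mirror's left-to-right reversal is modeled by a string reversal; this tiny sketch (illustrative, not part of the notes) shows why the pre-reversed label reads correctly in a rearview mirror:

```python
# A rearview mirror reverses text left-to-right; reversing the string models it.
def mirror(text):
    return text[::-1]

print(mirror("ECNALUBMA"))  # AMBULANCE
```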
10. An image segment is shown: (2020)
P(0,0)
2 3 2 6 1
6 2 3 6 2
5 3 2 3 5
2 4 3 5 2
4 5 2 3 6
q(4,4)
Let V be the set of gray-level values used to define adjacency in the image. Compute the D4, D8, and Dm
distances between pixels p and q for (i) V = {2,3} (ii) V = {2,6}.
Answer:
D4 (city-block):
Let p = (x,y) = (0,0) and q = (s,t) = (4,4). We know that
D4(p,q) = |x-s| + |y-t| = |0-4| + |0-4| = 4 + 4 = 8
D8 (chessboard):
We know that
D8(p,q) = max(|x-s|, |y-t|) = max(4,4) = 4
Dm distance:
For V = {2,3}: q has gray level 6, which is not in V, so no m-path to q exists and the Dm distance is
undefined.
For V = {2,6}: both p and q have values in V, but the pixels with values in V do not form an m-connected
chain from p to q, so again no Dm path exists.
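The "no Dm path" claim can be checked mechanically with a breadth-first search over m-adjacency (a sketch of mine, not part of the notes; grid values are taken from the segment above):

```python
from collections import deque

IMG = [[2, 3, 2, 6, 1],
       [6, 2, 3, 6, 2],
       [5, 3, 2, 3, 5],
       [2, 4, 3, 5, 2],
       [4, 5, 2, 3, 6]]

def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def in_v(p, V):
    x, y = p
    return 0 <= x < len(IMG) and 0 <= y < len(IMG[0]) and IMG[x][y] in V

def m_neighbors(p, V):
    out = [r for r in n4(p) if in_v(r, V)]
    for r in nd(p):  # diagonal step allowed only if shared 4-neighbors avoid V
        if in_v(r, V) and not any(in_v(s, V) for s in n4(p) & n4(r)):
            out.append(r)
    return out

def m_path_exists(p, q, V):
    """BFS over m-adjacency: is there an m-path from p to q?"""
    if not (in_v(p, V) and in_v(q, V)):
        return False
    seen, todo = {p}, deque([p])
    while todo:
        cur = todo.popleft()
        if cur == q:
            return True
        for r in m_neighbors(cur, V):
            if r not in seen:
                seen.add(r)
                todo.append(r)
    return False

print(m_path_exists((0, 0), (4, 4), {2, 3}))  # False: q's value 6 is not in V
print(m_path_exists((0, 0), (4, 4), {2, 6}))  # False: V-pixels never reach q
```

Both searches fail, confirming that the Dm distance is undefined in each case.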
11. Consider the two image subsets S1 and S2 shown in the following figure:
S1:          S2:
0 0 0 0 0    0 0 1 1 0
1 0 0 1 0    0 1 0 0 1
1 0 0 1 0    1 1 0 0 0
0 0 1 1 1    0 0 0 0 0
0 0 1 1 1    0 0 1 1 1
For V = {1}, determine whether these two subsets are
a) 4-adjacent
b) 8-adjacent and m-adjacent
Answer:
Let p be the 1 in the last column of S1's fourth row and q the 1 in the first column of S2's third row
(the closest pair of 1-valued pixels across the boundary between the subsets). Then,
(i) S1 and S2 are not 4-connected, because q is not in the set N4(p).
(ii) S1 and S2 are 8-connected, because q is in the set N8(p).
(iii) S1 and S2 are m-connected, because:
a) q is in ND(p), and
b) the set N4(p) ∩ N4(q) contains no pixels whose values are from V.
12. Calculate the (i) Euclidean, (ii) city-block, and (iii) chessboard distance between spatial locations
P(5,5) and Q(20,20). (2019)
Answer:
(i) Euclidean distance: De(P,Q) = [(5-20)^2 + (5-20)^2]^(1/2) = (225 + 225)^(1/2) = 15√2 ≈ 21.21
(ii) City-block distance: D4(P,Q) = |5-20| + |5-20| = 15 + 15 = 30
(iii) Chessboard distance: D8(P,Q) = max(|5-20|, |5-20|) = max(15,15) = 15
13. Show the m-adjacent path from the top-right to the bottom-left pixel of the following image using the
set of similar pixels. (2019)
V = {125, 126, 127, 128}
124 124 127 125
Answer:
With p the bottom-left pixel and q the top-right pixel of the 4 × 4 segment:
3 1 2 1(q)
2 2 0 2
1 2 1 1
1(p) 0 1 1
D4(p,q) = |x-s| + |y-t| = |0-3| + |0-3| = 6 units
D8(p,q) = max(|x-s|, |y-t|) = max(3,3) = 3
Dm(p,q) = 4, the length of the m-adjacent path from p to q.