Digital Image Processing Fundamental (Chapter-02)

Chapter 2 covers the fundamentals of digital images, including definitions of key terms such as pixel, adjacency, and various types of distance functions. It explains concepts like image sampling and quantization, which are essential for converting continuous images into digital formats. Additionally, the chapter discusses image formation in the human eye, techniques for image acquisition, and the importance of connectivity and boundaries in image analysis.

Chapter 2

Digital Image Fundamentals

No Question Year

1 Define the following terms:

(i) Pixel
a. Neighbor of Pixel
b. 4- neighbor
c. Diagonal Neighbor
d. 8 Neighbor
(ii) Adjacency
a. 4- adjacency
b. 8 – adjacency
c. M-adjacency
(iii) Path
(iv) Closed path
(v) Region
(vi) Boundary
(vii) Connectivity
(viii) Binary Image
(ix) Distance Function
(x) Euclidean Distance
(xi) City-block Distance / D4 distance
(xii) Chessboard Distance / D8 distance
(xiii) Mask
(xiv) Weber ratio
2 Define image sampling and quantization. 2020, 2018, 2015, 2011

3 Explain image sampling and quantization with example. 2017,2015

4 Explain the process of image formation in the eye. 2018, 2016, 2015, 2013, 2011, 2010
Or, Describe the principle of image formation in the human eye.

5 How are digital images represented? Explain in brief.
Or, How can you represent an image?

6 Define aliasing. Explain the image formation functional model. 2021, 2018, 2014, 2011
Or, Explain a simple image formation model.
7 What are the differences between photopic and scotopic vision? 2019, 2018, 2014, 2011

8 Explain the sensor strip technique for image acquisition. 2012
Or, Discuss three techniques for image acquisition.

9 You might have noticed that emergency vehicles such as ambulances 2013
are often labeled on the front hood with reversed lettering (e.g.,
ECNALUBMA). Explain why this is so.
Or, Why is AMBULANCE written as ECNALUBMA on emergency vehicles?

10 An image segment is shown: 2020

P(0,0)
2 3 2 6 1
6 2 3 6 2
5 3 2 3 5
2 4 3 5 2
4 5 2 3 6
                q(4,4)

Let V be the set of gray-level values used to define connectivity in the image. Compute the D4, D8 and Dm distances between pixels p and q for (i) V = {2, 3} (ii) V = {2, 6}.
11 Consider the two image subsets S1 and S2 shown in the following 2018,2014
figure:

S1            S2

0 0 0 0 0     0 0 1 1 0
1 0 0 1 0     0 1 0 0 1
1 0 0 1 0     1 1 0 0 0
0 0 1 1 1     0 0 0 0 0
0 0 1 1 1     0 0 1 1 1

For V= {1} determine whether these two subsets are

a) 4-adjacent
b) 8-adjacent and
c) m-adjacent
12 Calculate the (i) Euclidean, (ii) city-block and (iii) chessboard distances 2019
between spatial locations P(5,5) and Q(20,20).

13 Show the m-adjacent path from the top-right to the bottom-left pixel of 2019
the following image using the set of similar pixels
V = {125, 126, 127, 128}

124 124 127 125

124 127 126 124

124 128 124 123

126 127 124 124

14 An image segment is shown below:


Let V be the set of gray-level values used to define connectivity in the image.

Compute the D4, D8 and Dm distances.

P(0,0)

2 3 2 6 1

6 2 3 6 2

5 3 2 3 5

2 4 3 5 2

4 5 2 3 6

q(4,4)

15 Consider the image segment as shown below-

3 1 2 1(q)

2 2 0 2

1 2 1 1

1(p) 0 1 1

Let V = {0, 1}; compute the D4, D8 and Dm distances between p and q.
Chapter 2 (Page No: 268)
Digital Image Fundamentals

1. Define the following terms:

(i) Pixel

Answer: A digital image is composed of a finite number of elements, each of which


has a particular location and value. These elements are referred to as picture
elements, image elements, pels and pixels.

Fig: Coordinate convention for a digital image. The origin is at the top-left corner; rows run from 0 to M-1, columns run from 0 to N-1, and each element of the grid is one pixel.

(ii) Neighbor of Pixel

Answer: The neighborhood of a pixel is the collection of pixels which


surround it. The neighborhood of a pixel is required for operations such
as morphology, edge detection, median filter, etc.

a) 4-neighbor:

              (x, y+1)
(x-1, y)      P(x, y)      (x+1, y)
              (x, y-1)

Fig: 4-neighbors of pixel P.

A pixel P at coordinates (x, y) has four horizontal and vertical neighbors whose coordinates are given by (x+1, y), (x-1, y), (x, y+1), (x, y-1).

This set of pixels, called the 4-neighbors of P, is denoted by N4(P).

Each of these pixels is a unit distance from (x, y); if (x, y) is on the border of the image, some of the neighbors of P lie outside the digital image.

b) Diagonal Neighbor:
The 4-diagonal neighbors of P have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1),
(x-1, y-1) and are denoted by ND(P)

(x-1,y+1) (x+1,y+1)
P
(x,y)
(x-1,y-1) (x+1,y-1)

Fig: Diagonal neighbor of pixel.

c) 8-neighbor:

The 4-neighbors of P and the 4 diagonal neighbors of P together are called the 8-neighbors of P, and are denoted by N8(P) = N4(P) ∪ ND(P). These are the eight pixels immediately surrounding P(x, y).
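These neighborhood definitions map directly to code. The following Python sketch (not from the chapter; the function names and the bounds check are my own illustration) returns the N4, ND and N8 coordinate sets of a pixel, dropping any neighbor that falls outside an M x N image:

def n4(x, y, M, N):
    # 4-neighbors of (x, y): the horizontal and vertical neighbors inside an M x N image
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(i, j) for (i, j) in candidates if 0 <= i < M and 0 <= j < N]

def nd(x, y, M, N):
    # diagonal neighbors of (x, y)
    candidates = [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]
    return [(i, j) for (i, j) in candidates if 0 <= i < M and 0 <= j < N]

def n8(x, y, M, N):
    # 8-neighbors: the union of the 4-neighbors and the diagonal neighbors
    return n4(x, y, M, N) + nd(x, y, M, N)

# A border pixel of a 5 x 5 image has fewer than 8 neighbors:
print(n8(0, 0, 5, 5))   # [(1, 0), (0, 1), (1, 1)]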

(iii) Adjacency:

Two pixels are adjacent if they are neighbors and their intensity values satisfy a specified criterion of similarity, defined by a set V of intensity values.

e.g., V = {1} or V = {0, 2}

Binary image: possible values are {0, 1}

Gray-scale image: possible values are {0, 1, 2, ..., 255}

In a binary image, two pixels are adjacent if they are neighbors and their intensity values both belong to V (either 0 or 1).

In a gray-scale image, V can contain many of the gray-level values in the range 0 to 255.

Types of Adjacency: There are three types of Adjacency

1) 4-adjacency

2) 8-adjacency

3) m-adjacency

a) 4-adjacency:
Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).

A B C D
E Q P F
G H I J
K L M N

Let V = {255, 20, 10p, 21q, 100} (the subscripts p and q mark the values of those two pixels).
Then N4(P) = {C, I, F, Q}.

b) 8-adjacency:

Two pixels p and q with values from V are 8-adjacent if q is in the set N8(P).

A B C D
E Q P F
G H I J
K L M N

Let V = {255, 20p, 25q, 125, 235, 240, 130, 140, 122, 250, 210, 205}
Then N8(P) = {B, C, D, Q, F, H, I, J}.

c) m-adjacency:

Two pixels p and q with values from V are m-adjacent if

i) q is in N4(p),

or, ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Mixed (m-) adjacency is a modification of 8-adjacency, introduced to eliminate the ambiguity of multiple 8-adjacent paths.
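As an illustration of how the three adjacency rules can be tested (a sketch under the assumption that the image is stored as a list of rows and that the n4/nd/n8 helpers from the earlier sketch are available; the function names are hypothetical):

def value(img, p):
    # gray level of pixel p = (row, column) in an image stored as a list of rows
    return img[p[0]][p[1]]

def is_4_adjacent(img, V, p, q):
    M, N = len(img), len(img[0])
    return value(img, p) in V and value(img, q) in V and q in n4(*p, M, N)

def is_8_adjacent(img, V, p, q):
    M, N = len(img), len(img[0])
    return value(img, p) in V and value(img, q) in V and q in n8(*p, M, N)

def is_m_adjacent(img, V, p, q):
    # m-adjacency: q in N4(p), or q in ND(p) while N4(p) ∩ N4(q) holds no pixel with a value in V
    M, N = len(img), len(img[0])
    if value(img, p) not in V or value(img, q) not in V:
        return False
    if q in n4(*p, M, N):
        return True
    if q in nd(*p, M, N):
        common = set(n4(*p, M, N)) & set(n4(*q, M, N))
        return all(value(img, r) not in V for r in common)
    return False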

(iv) Path

A path from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates (x0, y0), (x1, y1), ..., (xn, yn), where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.

In this case, n is the length of the path.

Fig: A path of adjacent pixels from p(x, y) to q(s, t).

(v) Closed path

If, in a sequence of pixels with coordinates (x0, y0), (x1, y1), ..., (xn, yn), the first and last pixels coincide, i.e., (x0, y0) = (xn, yn), the path is called a closed path.

Fig: A closed path of adjacent pixels.

(vi) Region

Let R be a subset of pixels in an image. R is a region of the image if R is a connected set.

Here, the pixels of R are connected to one another: neighboring pixels of R have equal gray levels (or gray levels from the same set V).

(vii) Boundary

Let R be a subset of pixels in an image.

The boundary (border or contour) of a region R is the set of pixels in the region that have one or more neighbors that are not in R.

In other words, the boundary consists of the pixels where the region meets the rest of the image, whose intensity levels differ from those inside R.

(viii) Connectivity

Two pixels are said to be connected if they are adjacent, i.e., they satisfy the following criteria:

i) They are neighbors.

ii) Their gray levels are equal (or, more generally, satisfy the specified criterion of similarity).

(ix) Binary Image

An image whose pixels take only the two gray levels 0 and 1 is referred to as a binary image. For example, if the gray levels of an image are thresholded so that values from 0 to 125 are treated as 0 and values from 126 to 255 are treated as 1, the result is a binary image.

e.g., the gray levels {0, 100, 125} map to 0, and {126, ..., 255} map to 1.

(x) Distance Function

For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w), respectively, D is a distance function or metric if

i) D(p, q) ≥ 0 (D(p, q) = 0 iff p = q),

ii) D(p, q) = D(q, p),

and iii) D(p, z) ≤ D(p, q) + D(q, z).


(xi) Euclidean Distance

The Euclidean distance between p and q is defined as

De(p, q) = √[(x − s)² + (y − t)²]

With this measure, the pixels having a distance less than or equal to some value r from (x, y) are the points contained in a disk of radius r centered at (x, y).

(xii) City-block Distance / D4 distance

The D4 distance (or city-block distance) between pixels p and q is defined as

D4(p, q) = |x − s| + |y − t|

In this case, the pixels having a D4 distance ≤ 2 from (x, y) (the center point) form the following contours of constant distance:

2
2 1 2
2 1 0 1 2
2 1 2
2

The pixels with D4 = 1 are the 4-neighbors of (x,y)

(xiii) Chessboard Distance / D8 distance

The D8 distance (or chessboard distance) between pixels p and q is defined as

D8(p, q) = max(|x − s|, |y − t|)

In this case, the pixels with a D8 distance from (x, y) less than or equal to some value r form a square centered at (x, y). e.g., the pixels with D8 distance ≤ 2 from (x, y) (the center point) form the following contours of constant distance:

2 2 2 2 2
2 1 1 1 2
2 1 0 1 2
2 1 1 1 2
2 2 2 2 2
The pixels with D8 = 1 are the 8-neighbors of (x,y)
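The three distance measures have a direct one-line implementation; a minimal Python sketch (the function names are illustrative) following the definitions above:

import math

def d_euclidean(p, q):
    (x, y), (s, t) = p, q
    return math.sqrt((x - s) ** 2 + (y - t) ** 2)

def d4(p, q):                       # city-block distance
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d8(p, q):                       # chessboard distance
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

# Example (question 12 below): P(5, 5) and Q(20, 20)
print(d_euclidean((5, 5), (20, 20)))   # 21.21...
print(d4((5, 5), (20, 20)))            # 30
print(d8((5, 5), (20, 20)))            # 15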

(xiv) Mask

A mask is a filter. Concept of masking is also known as spatial filtering. Masking


is also known as filtering. In this concept we just deal with the filtering operation
that is performed directly on the image.

A sample mask has been shown below:

-1 0 1

-1 0 1

-1 0 1
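A minimal sketch of how such a mask is applied by sliding it over an image (plain nested loops, border pixels left untouched; this is an illustration, not a specific library routine):

def apply_mask(img, mask):
    # Correlate a 3 x 3 mask with a gray-scale image (list of rows); border pixels are left at 0.
    M, N = len(img), len(img[0])
    out = [[0] * N for _ in range(M)]
    for x in range(1, M - 1):
        for y in range(1, N - 1):
            total = 0
            for i in (-1, 0, 1):
                for j in (-1, 0, 1):
                    total += mask[i + 1][j + 1] * img[x + i][y + j]
            out[x][y] = total
    return out

mask = [[-1, 0, 1],
        [-1, 0, 1],
        [-1, 0, 1]]    # the sample mask shown above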

(xv) Weber ratio

The quantity ΔIc/I, where ΔIc is the increment of illumination discriminable 50% of the time with background illumination I, is called the Weber ratio. A small value of ΔIc/I means that a small percentage change in intensity is discriminable. This represents "good" brightness discrimination.
2. Define image sampling and quantization. (2020,2018,2015,2011)
Answer:

Sampling
Image sampling is the process of converting a continuous image into a discrete image by selecting a grid
of pixels at regular intervals. This determines the spatial resolution of the digital image. Digitizing the co-
ordinate value is called sampling.

Quantization
Image quantization is the process of mapping the sampled image's pixel values to a limited number of
discrete intensity levels. This reduces the number of distinct colors or gray levels in the image, determining

its color depth. Digitizing the amplitude value is called quantization.

3. Explain image sampling and quantization with example. (2017,2015)


Or, Explain the sampling and quantization process from an analog image to digitized image.
Answer:
In Digital Image Processing, signals captured from the physical world need to be translated into digital form
by “Digitization” Process. In order to become suitable for digital processing, an image function f(x,y) must
be digitized both spatially and in amplitude. This digitization process involves two main processes called

• Sampling: Digitizing the co-ordinate value is called sampling.


• Quantization: Digitizing the amplitude value is called quantization
Typically, a frame grabber or digitizer is used to sample and quantize the analogue video signal.

Sampling

An analogue image is continuous not just in its co-ordinates (x-axis) but also in its amplitude (y-axis), and the part of digitization that deals with the co-ordinates is known as sampling. In digitizing, sampling is done on the independent variable: in the case of the equation y = sin(x), it is done on the x variable.

A continuous signal also shows random variations caused by noise. Taking samples reduces this noise: the more samples we take, the better the quality of the image and the more the noise is suppressed, and vice versa. However, sampling along the x-axis alone does not convert the signal to digital form; the y-axis must also be digitized, which is known as quantization.

Sampling has a direct relationship with image pixels. The total number of pixels in an image can be calculated as Pixels = total number of rows × total number of columns. For example, a total of 36 pixels means a square image of 6 × 6; taking 36 samples of the continuous signal on the x-axis gives the 36 pixels of this image, since more samples result in more pixels. The number of samples is also equal to the number of sensors on the CCD array.

Quantization

Quantization is opposite to sampling because it is done on “y axis” while sampling is done on “x axis”.
Quantization is a process of transforming a real valued sampled image to one taking only a finite number
of distinct values. Under quantization process the amplitude values of the image are digitized. In simple
words, when you are quantizing an image, you are actually dividing a signal into quanta(partitions).

Now let us see how quantization is done. Levels are assigned to the values generated by the sampling process. Although samples have been taken, they still span a continuous range of gray-level values vertically. Suppose these values are quantized into 5 different levels or partitions, ranging from 0 (black) to 4 (white); the number of levels can vary according to the type of image required.

Quantization is related to gray-level resolution. A signal quantized to 5 levels produces an image with only 5 different gray values: more or less a black-and-white image with a few shades of gray in between.

To improve the quality of the image, we can increase the number of levels assigned to the sampled image; increasing it to 256 gives an ordinary gray-scale image. The assigned levels are called gray levels. Most digital image-processing devices quantize into k equal intervals; if b bits per pixel are used, the number of levels is k = 2^b.

The number of quantization levels should be high enough for human perception of fine shading details in the image. The main problem in an image quantized with insufficient brightness levels is the occurrence of false contours.
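A small NumPy sketch of uniform quantization as described above (the sample values and bit depth are made-up illustration values): continuous amplitudes in [0, 1] are mapped to 2^b equally spaced gray levels.

import numpy as np

def quantize(samples, bits):
    # Uniformly quantize values in [0.0, 1.0] to 2**bits gray levels (0 .. 2**bits - 1).
    levels = 2 ** bits
    q = np.floor(samples * levels).astype(int)
    return np.clip(q, 0, levels - 1)

samples = np.array([0.02, 0.31, 0.49, 0.77, 0.98])   # sampled amplitudes (illustrative)
print(quantize(samples, 3))                          # 8 levels -> [0 2 3 6 7]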

3. Differentiate between Sampling and Quantization.

Answer:

Sampling | Quantization
Digitization of co-ordinate values. | Digitization of amplitude values.
x-axis (time) is discretized. | x-axis (time) remains continuous.
y-axis (amplitude) remains continuous. | y-axis (amplitude) is discretized.
Sampling is done prior to the quantization process. | Quantization is done after the sampling process.
It determines the spatial resolution of the digitized image. | It determines the number of gray levels in the digitized image.
It reduces the continuous curve to a series of "tent poles" over time. | It reduces the continuous amplitude range to a series of stair steps.
A single amplitude value is selected from the values within each time interval to represent it. | The sampled amplitude values are rounded off to a defined set of possible values.
4. Explain the process of image formation in the eye.
Or, Describe the principle of image formation in the human eye. (2018, 2016, 2015, 2013, 2011, 2010)
Answer:

Image Formation in the Eye


The eye is considered by most neuroscientists as actually part of the brain. It consists of a small spherical
globe of about 2cm in diameter, which is free to rotate under the control of 6 extrinsic muscles. Light enters
the eye through the transparent cornea, passes through the aqueous humor, the lens, and the vitreous humor,
where it finally forms an image on the retina
Fig: Graphical representation of the eye looking at a palm tree.
In the human eye, the radius of curvature of the anterior surface of the lens is greater than the radius of its
posterior surface. The shape of the lens is controlled by tension in the fibers of the ciliary body.

❑ To focus on distant objects, the controlling muscles cause the lens to be relatively flattened.

❑ Similarly, these muscles allow the lens to become thicker in order to focus on objects near the eye.

❑ The distance between the center of the lens and the retina (called focal length) varies from approximately
17mm to about 14mm, as the refractive power of the lens increases from its minimum to its maximum.

❑ When the eye focuses on an object farther away than about 3m, the lens exhibits its lowest refractive
power.

When the eye focuses on a nearby object, the lens is most strongly refractive.

This information makes it easy to calculate the size of the retinal image of any object.

❑ e.g., The observer is looking at a tree 15m high at a distance of 100m.

If h is the height in mm of that object in the retinal image, the geometry of the above fig yields

15/100 = h/17

⇒ h = (15 × 17)/100 = 2.55 mm
The retinal image is focused primarily in the area of the fovea, where the radiant energy is transformed into
electrical impulses that are ultimately decoded by the brain.
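The same similar-triangles calculation generalizes to any object; a tiny Python sketch (the function name is hypothetical):

def retinal_image_height(object_height_m, distance_m, focal_length_mm=17.0):
    # similar triangles: object_height / distance = h / focal_length
    return focal_length_mm * object_height_m / distance_m

print(retinal_image_height(15, 100))   # 2.55 (mm), matching the worked example above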

5. How are digital images represented? Explain in brief.


Or, How can you represent an image?

Answer:

Representing Digital Images

Fig: Coordinate convention used to represent digital images. The origin is at the top-left corner; x runs downward over the rows 0, 1, ..., M-1, y runs to the right over the columns 0, 1, ..., N-1, and each element f(x, y) is one pixel.

Assume that an image f(x, y) is sampled so that the resulting digital image has M rows and N columns.
The values of the coordinates (x, y) now become discrete quantities. We use integer values for these
discrete coordinates.
Thus, the values of the coordinates at the origin are (x, y) = (0, 0). The next coordinate values along the
first row of the image are represented as (x, y) = (0, 1).
The complete MN digital image in the following compact matrix form:
 f (0,0) f (0,1) ... f (0, N − 1) 
 
 f (1,0) f (1,1) ... f (1, N − 1) 
f ( x , y) =  
 ... ... ... ... 
 
f (M − 1,0) f (M − 1,1) ... f (M − 1, N − 1)
The right side of this eq (1) is by definition a digital image. Each element of this matrix array is called an
image element, picture element, pixel, or pel.
❑ A more traditional matrix notation for a digital image and its elements:

A = [ a(0,0)       a(0,1)       ...   a(0,N-1)
      a(1,0)       a(1,1)       ...   a(1,N-1)
      ...          ...          ...   ...
      a(M-1,0)     a(M-1,1)     ...   a(M-1,N-1) ]   ............ (2)

Clearly, a(i, j) = f(x = i, y = j) = f(i, j).
So, eq (1) and eq(2) are identical matrices.
Due to processing, storage, and sampling hardware considerations, the number of gray levels is typically an integer power of 2:

L = 2^k ....................... (3)

The discrete levels are equally spaced over the interval [0, L-1].

To store a digital image, the number of bits required is

b = M × N × k ................. (4)

When M = N,

b = N²k

An image that can have 2^k gray levels is referred to as a 'k-bit image'.
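The relations L = 2^k and b = M × N × k are easy to check numerically; a quick Python sketch (the image size below is only an example):

def storage_bits(M, N, k):
    # bits needed to store an M x N image with 2**k gray levels: b = M * N * k
    return M * N * k

k = 8
L = 2 ** k                        # 256 gray levels (an 8-bit image)
b = storage_bits(1024, 1024, k)   # 8,388,608 bits
print(L, b, b // 8, "bytes")      # 256 8388608 1048576 bytes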

6. Define aliasing. Explain the image formation functional model.


Or, Explain a simple image formation model. (2021,2018, 2014, 2011)
Answer:

Aliasing:
Aliasing is an effect that causes different signals to become indistinguishable from each other during sampling. It is characterized by the output differing from the original signal because sampling, resampling or interpolation was carried out at too low a rate: a lower spatial resolution in images, a lower frame rate in video, or a lower sample rate in audio. Anti-aliasing filters can be used to correct this problem.
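Aliasing can be demonstrated numerically: when a signal is sampled below twice its highest frequency, two different sine waves produce identical samples. A short NumPy sketch (the frequencies and sampling rate are illustrative choices):

import numpy as np

fs = 10.0                        # sampling rate: 10 samples per second
t = np.arange(0, 1, 1 / fs)      # sample instants
f1, f2 = 3.0, 13.0               # 13 Hz is above the Nyquist limit (fs / 2 = 5 Hz)
s1 = np.sin(2 * np.pi * f1 * t)
s2 = np.sin(2 * np.pi * f2 * t)  # aliases to 13 - 10 = 3 Hz at this sampling rate
print(np.allclose(s1, s2))       # True: the two signals are indistinguishable after sampling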

A Simple Image Formation Model


An image is a two-dimensional function of the form f(x, y).
The value or amplitude of f at spatial coordinates (x, y) is a positive scalar quantity.
When an image is generated from a physical process, its values are proportional to the energy radiated by a physical source (e.g., electromagnetic waves).
f(x, y) must be non-zero and finite:
i.e., 0 < f(x, y) < ∞ ........................ (1)
The function f(x, y) may be characterized by two components:
1) The amount of source illumination incident on the scene being viewed
   → Illumination, denoted by i(x, y)
2) The amount of illumination reflected by the objects in the scene
   → Reflectance, denoted by r(x, y)
Combining the two functions as a product gives
f(x, y) = i(x, y) r(x, y) ................ (2)
where 0 < i(x, y) < ∞ ..................... (3)
and 0 < r(x, y) < 1 ................ (4)
Reflectance is bounded by 0 (total absorption) and 1 (total reflectance).
The nature of i(x, y) is determined by the illumination source,
and r(x, y) is determined by the characteristics of the imaged object.
For images formed by transmission through a medium, a transmissivity function is used in place of the reflectivity function.
Typical ranges of i(x, y) for visible light:

Illumination source | Condition | Value
Sun | Clear sunny day | 90,000 lm/m2
Sun | Cloudy day | 10,000 lm/m2
Indoor lighting | Commercial area | 1,000 lm/m2
Moon | Clear evening | 0.1 lm/m2

❑ Typical values of r(x,y)


black velvet 0.01
stainless steel 0.65
flat-white wall paint 0.80
silver-plated metal 0.90
snow 0.93
❑ The intensity of a monochrome image at any coordinates (x0, y0) is called the gray level (l) of the image at that point. That is,

l = f(x0, y0) .................... (5)

It is evident that l lies in the range

Lmin ≤ l ≤ Lmax ............... (6)

where Lmin is positive and Lmax is finite, with

Lmin = imin rmin
Lmax = imax rmax

❑ Typical limits for indoor values in the absence of additional illumination are

Lmin ≈ 10 and Lmax ≈ 1000

❑ The interval [Lmin, Lmax] is called the gray scale.
Numerically, it is shifted to [0, L-1],
where l = 0 is considered black
and l = L-1 is considered white.
All intermediate values are shades of gray varying from black to white.
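A small numerical sketch of the model f(x, y) = i(x, y) r(x, y), using the illumination and reflectance values from the tables above; the final scaling of f onto the gray scale [0, L-1] is my own illustration, not part of the model itself:

import numpy as np

L = 256                                    # 8-bit gray scale [0, 255]
i = np.full((2, 3), 10_000.0)              # cloudy-day illumination (lm/m2, from the table above)
r = np.array([[0.01, 0.65, 0.80],          # black velvet, stainless steel, flat-white paint
              [0.90, 0.93, 0.50]])         # silver-plated metal, snow, an arbitrary surface
f = i * r                                  # image formation model: f = i * r

# map [f.min(), f.max()] onto the gray scale [0, L-1]
gray = np.round((f - f.min()) / (f.max() - f.min()) * (L - 1)).astype(int)
print(gray)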

7. What are the differences between photopic and scotopic vision? (2019, 2018, 2014, 2011)

Answer:

Features | Photopic vision | Scotopic vision
Light conditions | Bright light (daylight or well-lit environments) | Low light (nighttime or dim environments)
Primary photoreceptors | Cones | Rods
Sensitivity to light | Less sensitive (requires more light) | Highly sensitive (requires less light)
Color perception | Full color vision (red, green, blue) | No color perception (black and white)
Acuity (sharpness) | High visual acuity | Low visual acuity
Peak sensitivity wavelength | Approximately 555 nm (green) | Approximately 507 nm (blue-green)
Adaptation speed | Quick adaptation to changing light | Slow adaptation to darkness
Number of receptors | Fewer cones (approximately 6-7 million) | More rods (approximately 120 million)
Location in retina | Concentrated in the fovea (central retina) | Distributed throughout the peripheral retina
Visual tasks | Detailed tasks (reading, recognizing faces) | Peripheral vision and motion detection

8. Explain the sensor strip technique for image acquisition.


Or, Discuss three techniques for image acquisition. (2012)
Answer:

(1) Image Acquisition Using a Single Sensor:


Figure 5.1 (a) shows the components of a single sensor. Perhaps the most familiar sensor of this type is the
photodiode, which is constructed of silicon materials and whose output voltage waveform is proportional
to light. The use of a filter in front of a sensor improves selectivity. For example, a green (pass) filter in
front of a light sensor favors light in the green band of the color spectrum. As a consequence, the sensor
output will be stronger for green light than for other components in the visible spectrum
In order to generate a 2-D image using a single sensor, there has to be relative displacements in both the x-
and y-directions between the sensor and the area to be imaged. Figure 5.2 shows an arrangement used in
high-precision scanning, where a film negative is mounted onto a drum whose mechanical rotation provides
displacement in one dimension. The single sensor is mounted on a lead screw that provides motion in the
perpendicular direction. Since mechanical motion can be controlled with high precision, this method is an
inexpensive (but slow) way to
obtain high-resolution images. Other similar mechanical arrangements use a flat bed, with the sensor
moving in two linear directions. These types of mechanical digitizers are sometimes referred to as
microdensitometers.
Fig.5.2. Combining a single sensor with motion to generate a 2-D image

(2) Image Acquisition Using Sensor Strips:


A geometry that is used much more frequently than single sensors consists of an in-line arrangement of
sensors in the form of a sensor strip, as Fig. 5.1 (b) shows. The strip provides imaging elements in one
direction. Motion perpendicular to the strip provides imaging in the other direction, as shown in Fig. 5.3
(a).This is the type of arrangement used in most flat bed scanners. Sensing devices with 4000 or more in-
line sensors are possible. In-line sensors are used routinely in airborne imaging applications, in which the
imaging system is mounted on an aircraft that flies at a constant altitude and speed over the geographical
area to be imaged. One-dimensional imaging sensor strips that respond to various bands of the
electromagnetic spectrum are mounted perpendicular to the direction of flight. The imaging strip gives one
line of an image at a time, and the motion of the strip completes the other dimension of a two-dimensional
image. Lenses or other focusing schemes are used to project the area to be scanned onto the sensors. Sensor
strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional
(“slice”) images of 3-D objects, as Fig. 5.3 (b) shows. A rotating X-ray source provides illumination and
the portion of the sensors opposite the source collect the X-ray energy that pass through the object (the
sensors obviously have to be sensitive to X-ray energy).This is the basis for medical and industrial
computerized axial tomography (CAT). It is important to note that the output of the sensors must be
processed by reconstruction algorithms whose objective is to transform the sensed data into meaningful
cross-sectional images.
In other words, images are not obtained directly from the sensors by motion alone; they require extensive
processing. A 3-D digital volume consisting of stacked images is generated as the object is moved in a
direction perpendicular to the sensor ring. Other modalities of imaging based on the CAT principle include
magnetic resonance imaging (MRI) and positron emission tomography (PET).The illumination sources,
sensors, and types of images are different, but conceptually they are very similar to the basic imaging
approach shown in Fig. 5.3 (b).
Fig.5.3 (a) Image acquisition using a linear sensor strip (b) Image acquisition using a circular sensor strip.

(3) Image Acquisition Using Sensor Arrays:


Figure 5.1 (c) shows individual sensors arranged in the form of a 2-D array. Numerous electromagnetic and
some ultrasonic sensing devices frequently are arranged in an array format. This is also the predominant
arrangement found in digital cameras. A typical sensor for these cameras is a CCD array, which can be
manufactured with a broad range of sensing properties and can be packaged in rugged arrays of 4000 *
4000 elements or more. CCD sensors are used widely in digital cameras and other light sensing instruments.
The response of each sensor is
proportional to the integral of the light energy projected onto the surface of the sensor, a property that is
used in astronomical and other applications requiring low noise images. Noise reduction is achieved by
letting the sensor integrate the input light signal over minutes or even hours. Since the sensor array shown
in Fig. 5.4 (c) is two dimensional, its key advantage is that a complete image can be obtained by focusing
the energy pattern onto the surface of the array. The principal
manner in which array sensors are used is shown in Fig.5.4. This figure shows the energy from an
illumination source being reflected from a scene element, but, as mentioned at the beginning of this section,
the energy also could be transmitted through the scene elements. The first function performed by the
imaging system shown in Fig.5.4 (c) is to collect the incoming energy and focus it onto an image plane. If
the illumination is light, the front end of the imaging system is a lens, which projects the viewed scene onto
the lens focal plane, as Fig. 5.4 (d) shows. The sensor array, which is coincident with the focal plane,
produces outputs proportional to the integral of the light received at each sensor. Digital and analog circuitry
sweep these outputs and converts them to a video signal, which is then digitized by another section of the
imaging system. The output is a digital image, as shown diagrammatically in Fig. 5.4 (e).
Fig.5.4 An example of the digital image acquisition process (a) Energy (“illumination”) source (b) An
element of a scene (c) Imaging system (d) Projection of the scene onto the image plane (e) Digitized image

9. You might have noticed that emergency vehicles such as ambulances are often labeled on the front
hood with reversed lettering (e.g., ECNALUBMA). Explain why this is so. (2013)
Or, Why is AMBULANCE written as ECNALUBMA on emergency vehicles?
Answer:

While the original purpose of the reversed 'AMBULANCE' was better readability, advances in lighting,
safety design, and understanding of visual cognition have rendered this lettering of secondary importance.

Some issues with the reverse AMBULANCE signage:

• The word is long and difficult to read in rear-view mirrors.


• It presents problems for multicultural/multilingual cities/towns.
• There are far more effective ways to visually communicate an emergency vehicle.
• When ambulances need to be noticed, flashing lights, sound, and sign coloring are more
effective than lettering.

Moreover, the word "AMBULANCE" is written as "ECNALUBMA" on the front of emergency vehicles
to ensure it is easily readable by drivers in their rearview mirrors. Here’s a detailed explanation:

1. Reflection in Mirrors:
- When text is viewed through a mirror, it appears reversed. This means that if "AMBULANCE" were
written normally on the front of the vehicle, it would appear as "ECNALUBMA" in the rearview mirror,
making it difficult to read quickly.

2. Reversed Writing:

- By writing "AMBULANCE" as "ECNALUBMA," the text appears correctly oriented when viewed in
a rearview mirror. This ensures that drivers can immediately recognize the word "AMBULANCE" and
understand that an emergency vehicle is approaching, prompting them to move aside.

This practice enhances safety and response time by ensuring that drivers can quickly and easily recognize
an ambulance approaching from behind, allowing them to yield the right of way more efficiently.

10. An image segment is shown:

P(0,0)

2 3 2 6 1

6 2 3 6 2

5 3 2 3 5

2 4 3 5 2

4 5 2 3 6

q (4,4)

Let V be the set of gray-level values used to define connectivity in the image. Compute the D4, D8 and Dm
distances between pixels p and q for (i) V = {2, 3} (ii) V = {2, 6}.

Answer:

D4 (city-block) distance:
Let p = (x, y) = (0, 0) and q = (s, t) = (4, 4).
We know that
D4(p, q) = |x − s| + |y − t|
= |0 − 4| + |0 − 4| = 4 + 4 = 8

D8 (chessboard) distance:
We know that
D8(p, q) = max(|x − s|, |y − t|)
= max(4, 4) = 4

Dm distance:
For V = {2, 3}, pixel q has value 6, which is not in V; and for V = {2, 6}, no sequence of m-adjacent pixels with values in V connects p to q. In both cases a Dm path does not exist.
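Whether a Dm path exists (and how long the shortest one is) can be checked with a breadth-first search over m-adjacent pixels. The sketch below is my own illustration; it reuses the value, n8 and is_m_adjacent helpers from the sketches under question 1, and reproduces the "no path" result for both value sets:

from collections import deque

def dm_distance(img, V, p, q):
    # Length of the shortest m-path from p to q through pixels with values in V, or None if no path exists.
    if value(img, p) not in V or value(img, q) not in V:
        return None
    M, N = len(img), len(img[0])
    dist = {p: 0}
    queue = deque([p])
    while queue:
        u = queue.popleft()
        if u == q:
            return dist[u]
        for v in n8(*u, M, N):                            # candidate neighbors
            if v not in dist and is_m_adjacent(img, V, u, v):
                dist[v] = dist[u] + 1
                queue.append(v)
    return None

img = [[2, 3, 2, 6, 1],
       [6, 2, 3, 6, 2],
       [5, 3, 2, 3, 5],
       [2, 4, 3, 5, 2],
       [4, 5, 2, 3, 6]]
print(dm_distance(img, {2, 3}, (0, 0), (4, 4)))   # None: q has value 6, not in V
print(dm_distance(img, {2, 6}, (0, 0), (4, 4)))   # None: no m-path through {2, 6}-valued pixels reaches q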
11. Consider the two image subsets S1 and S2 shown in the following figure:

S1            S2

0 0 0 0 0     0 0 1 1 0
1 0 0 1 0     0 1 0 0 1
1 0 0 1 0     1 1 0 0 0
0 0 1 1 1     0 0 0 0 0
0 0 1 1 1     0 0 1 1 1

For V = {1}, determine whether these two subsets are (2018, 2014)

a) 4-adjacent
b) 8-adjacent, and
c) m-adjacent

Answer:
Let p and q be the adjoining boundary pixels of S1 and S2, respectively, as in the figure. Then,
(i) S1 and S2 are not 4-adjacent, because q is not in the set N4(p).
(ii) S1 and S2 are 8-adjacent, because q is in the set N8(p).
(iii) S1 and S2 are m-adjacent, because:
a) q is in ND(p), and
b) the set N4(p) ∩ N4(q) contains no pixels whose values are from V.

12. Calculate (2019)

(i) Euclidean
(ii) City-block
(iii) Chessboard distance
between spatial locations P(5,5) and Q(20,20)
Answer:

(i) Euclidean distance = √[(5 − 20)² + (5 − 20)²]
= √450 ≈ 21.21
(ii) City-block distance = |5 − 20| + |5 − 20|
= 15 + 15 = 30
(iii) Chessboard distance = max(|5 − 20|, |5 − 20|)
= max(15, 15) = 15

13. Show the m-adjacent path from the top-right to the bottom-left pixel of the following image using the
set of similar pixels. (2019)
V = {125, 126, 127, 128}
124 124 127 125

124 127 126 124

124 128 124 123

126 127 124 124

Answer:

124 124 127 125
124 127 126 124
124 128 124 123
126 127 124 124

Starting from the top-right pixel (value 125) and moving only between pixels whose values are in V, one m-adjacent path is
125(0,3) → 127(0,2) → 126(1,2) → 127(1,1) → 128(2,1) → 127(3,1) → 126(3,0).
Every step is a 4-adjacent move between pixels with values in V, so the path is an m-path, and its length is

M-adjacent path (Dm) = 6

14. Consider the image segment as shown below-

3 1 2 1(q)

2 2 0 2

1 2 1 1

1(p) 0 1 1

Let V={0,1}; compute the D4,D8 and Dm distances between p and q.


Answer:

D4(p, q) = |x − s| + |y − t| = |3 − 0| + |0 − 3| = 3 + 3 = 6 units

D8(p, q) = max(|x − s|, |y − t|) = max(3, 3) = 3

Dm: the shortest m-path through pixels with values in V = {0, 1} is
p(3,0) → (3,1) → (3,2) → (2,2) → (1,2) → q(0,3);
every step except the last is a 4-adjacent move, and the final diagonal step is m-adjacent because the common 4-neighbors of (1,2) and (0,3) both have value 2, which is not in V. Hence Dm = 5.
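This value can be cross-checked with the dm_distance sketch given after question 10 (assuming those helper functions have been defined):

img15 = [[3, 1, 2, 1],
         [2, 2, 0, 2],
         [1, 2, 1, 1],
         [1, 0, 1, 1]]
print(dm_distance(img15, {0, 1}, (3, 0), (0, 3)))   # 5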
