Digital Image Processing Tutorial Questions Answers

This document provides tutorial questions and answers on digital image processing. It includes 2-mark questions defining key terms like image, dynamic range, brightness, gray level, color model, and pixel. It also lists hardware oriented color models, applications of color models, and steps involved in digital image processing. The 16-mark questions explain brightness adaptation and discrimination in detail, and sampling and quantization which involve spatially digitizing image coordinates and quantizing amplitude values into discrete gray levels. Reducing gray levels can cause false contouring artifacts.

Tutorial Questions

On

DIGITAL IMAGE PROCESSING


(2-mark and 16-mark questions with answers)

PART-A: 2-Mark Questions


1. Define Image
An image may be defined as a two-dimensional light intensity function f(x, y), where x and y denote spatial coordinates and the amplitude or value of f at any point (x, y) is called the intensity, gray level, or brightness of the image at that point.


2. What is Dynamic Range?
The range of values spanned by the gray scale is called the dynamic range of an image. An image has high contrast if its dynamic range is high, and a dull, washed-out gray look if its dynamic range is low.
3. Define Brightness
Brightness of an object is its perceived luminance, which depends on the surround. Two objects with different surroundings may have identical luminance but different brightness.
5. What is meant by Gray level?
Gray level refers to a scalar measure of intensity that ranges from
black to grays and finally to white.
6. What is meant by Color model?
A color model is a specification of a 3-D coordinate system and a subspace within that system where each color is represented by a single point.
7. List the hardware oriented color models
1. RGB model
2. CMY model
3. YIQ model
4. HSI model
8. What is Hue and saturation?
Hue is a color attribute that describes a pure color, whereas saturation gives a measure of the degree to which a pure color is diluted by white light.
9. List the applications of color models
1. RGB model --- used for color monitors and color video cameras
2. CMY model --- used for color printing
3. HSI model --- used for color image processing
4. YIQ model --- used for color picture transmission
10. What is Chromatic Adaptation?
The hue of a perceived color depends on the adaptation of the viewer. For example, the American flag will not immediately appear red, white, and blue if the viewer has been subjected to high-intensity red light before viewing the flag. The color of the flag will appear to shift in hue toward the red complement, cyan.
11. Define Resolutions
Resolution is defined as the smallest discernible detail in an image. Spatial resolution is the smallest discernible detail in an image, and gray-level resolution refers to the smallest discernible change in gray level.
12. What is meant by pixel?
A digital image is composed of a finite number of elements, each of which has a particular location and value. These elements are referred to as pixels, image elements, picture elements, or pels.
13. Define Digital image
When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.
14. What are the steps involved in DIP?
1. Image Acquisition
2. Preprocessing
3. Segmentation
4. Representation and Description
5. Recognition and Interpretation
15. What is recognition and Interpretation?
Recognition is a process that assigns a label to an object based on the information provided by its descriptors. Interpretation means assigning meaning to a recognized object.
16. Specify the elements of DIP system
1. Image Acquisition

2. Storage
3. Processing
4. Display
17. List the categories of digital storage
1. Short term storage for use during processing.
2. Online storage for relatively fast recall.
3. Archival storage for infrequent access.
18. What are the types of light receptors?
The two types of light receptors are cones and rods.

19. Differentiate photopic and scotopic vision

Photopic vision:
1. The human being can resolve fine details with the cones, because each cone is connected to its own nerve end.
2. It is also known as bright-light vision.

Scotopic vision:
1. Several rods are connected to one nerve end, so it gives an overall picture of the image.
2. It is also known as dim-light vision.

20. How cones and rods are distributed in retina?
In each eye, cones are in the range 6-7 million and rods are in the
range 75-150 million.
21. Define subjective brightness and brightness adaptation
Subjective brightness means intensity as perceived by the human visual system. Brightness adaptation means the human visual system can operate only from the scotopic threshold to the glare limit; it cannot operate over the whole range simultaneously. It accomplishes this large variation by changes in its overall sensitivity.
22. Define weber ratio
The ratio of the increment of illumination to the background illumination is called the Weber ratio, i.e., ΔI/I. If the ratio ΔI/I is small, only a small percentage change in intensity is discriminable, i.e., good brightness discrimination. If the ratio ΔI/I is large, a large percentage change in intensity is needed, i.e., poor brightness discrimination.
23. What is meant by machband effect?
Although the intensity of the stripes is constant, the visual system perceives a brightness pattern that is strongly scalloped near the boundaries. These perceived bands are called Mach bands, and the phenomenon is the Mach band effect.
24. What is simultaneous contrast?
A region's perceived brightness does not depend only on its intensity but also on its background. All the center squares have exactly the same intensity; however, they appear to the eye to become darker as the background becomes lighter.
25. What is meant by illumination and reflectance?
Illumination is the amount of source light incident on the scene. It is
represented as i(x, y).Reflectance is the amount of light reflected by
the object in the scene. It is represented by r(x, y).
26. Define sampling and quantization
Sampling means digitizing the co-ordinate value (x, y). Quantization
means digitizing the amplitude value.
27. Find the number of bits required to store a 256 X 256
image with 32 gray levels
32 gray levels = 2^5, so k = 5 bits per pixel.
256 × 256 × 5 = 327,680 bits.
28. Write the expression to find the number of bits to store a
digital image?
The number of bits required to store a digital image is
b=M X N X k
When M=N, this equation becomes
b=N^2k
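The formula b = M × N × k can be checked in a few lines (a minimal sketch using only the Python standard library; `image_storage_bits` is an illustrative helper name, not from the text):

```python
# Sketch: bits needed to store an M x N image with L = 2^k gray levels (b = M*N*k).
import math

def image_storage_bits(M, N, L):
    """Bits to store an M x N image whose number of gray levels L is a power of two."""
    k = int(math.log2(L))              # bits per pixel
    assert 2 ** k == L, "L must be a power of two"
    return M * N * k

# The 256 x 256, 32-level example from Q27:
print(image_storage_bits(256, 256, 32))  # 327680
```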
29. Write short notes on neighbors of a pixel.

The pixel p at coordinates (x, y) has 4 neighbors, i.e., 2 horizontal and 2 vertical neighbors, whose coordinates are given by (x+1, y), (x-1, y), (x, y-1), (x, y+1). These are called the direct neighbors and are denoted by N4(p).
The four diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y-1), (x-1, y+1), and are denoted by ND(p).
The eight neighbors of p, denoted by N8(p), are the combination of the 4 direct neighbors and the 4 diagonal neighbors.
30. Explain the types of connectivity.
1. 4 connectivity
2. 8 connectivity
3. M connectivity (mixed connectivity)
31. What is meant by path?
Path from pixel p with co-ordinates (x, y) to pixel q with co-ordinates
(s,t) is a sequence of distinct pixels with co-ordinates.
32. Give the formula for calculating D4 and D8 distance.
D4 distance (city-block distance) is defined by
D4(p, q) = |x - s| + |y - t|
D8 distance (chessboard distance) is defined by
D8(p, q) = max(|x - s|, |y - t|)
33. What is geometric transformation?
Transformation is used to alter the coordinate description of an image. The basic geometric transformations are
1. Image translation
2. Scaling
3. Image rotation
34. What is image translation and scaling?
Image translation means repositioning the image from one coordinate location to another along a straight-line path. Scaling is used to alter the size of the object or image, i.e., the coordinate system is scaled by a factor.
35. Define the term Luminance

Luminance, measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source.

PART-B: 16-Mark Questions


1. Explain Brightness adaptation and Discrimination
Digital images are displayed as a discrete set of intensities, so the eye's ability to discriminate between different intensity levels is an important consideration.
Subjective brightness is a logarithmic function of the light intensity incident on the eye. The long solid curve represents the range of intensities to which the visual system can adapt; in photopic vision alone the range is about 10^6. The visual system accomplishes this large variation by changes in its overall sensitivity, a phenomenon known as brightness adaptation.
The eye's ability to discriminate between different intensity levels at any specific adaptation level is measured by the Weber ratio ΔI/I, where I + ΔI is the just-noticeable stimulus over a background of intensity I.
The eye is capable of detecting contouring effects in monochrome images whose overall intensity is represented by fewer than approximately two dozen levels. A second phenomenon, called simultaneous contrast, is related to the fact that a region's perceived brightness does not depend only on its intensity: identical center squares appear to the eye to become darker as the background gets lighter.
2. Explain sampling and quantization.
For computer processing, the image function f(x, y) must be digitized both spatially and in amplitude. Digitization of the spatial coordinates is called image sampling, and amplitude digitization is called gray-level quantization.
Sampling:
Consider a digital image of size 1024 × 1024 with 256 gray levels. With the display area used for the image kept the same, the pixels in the lower-resolution images were duplicated in order to fill the entire display. This pixel replication produces a checkerboard effect, which is visible in the images of lower resolution. It is not always possible to differentiate a 512 × 512 image from a 1024 × 1024 image under this effect, but a slight increase in graininess and a small decrease in sharpness are noted.
A 256 × 256 image shows a fine checkerboard pattern at the edges and more pronounced graininess throughout the image. These effects are much more visible in a 128 × 128 image and become quite pronounced in 64 × 64 and 32 × 32 images.
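The subsample-then-replicate display described above can be sketched as follows (illustrative only; assumes NumPy, and `subsample`/`replicate` are hypothetical helper names):

```python
# Sketch: shrinking an image and re-expanding it by pixel replication,
# the operation that produces the blocky checkerboard effect described above.
import numpy as np

def subsample(img, factor):
    return img[::factor, ::factor]            # keep every `factor`-th pixel

def replicate(img, factor):
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

img = np.arange(16, dtype=float).reshape(4, 4)   # stand-in for a real image
low = subsample(img, 2)                          # 2x2 "lower resolution" image
back = replicate(low, 2)                         # blown back up to 4x4
print(back.shape)  # (4, 4): same display area, but blocky 2x2 tiles
```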
Quantization:
This concerns the effects produced when the number of bits used to represent the gray levels in an image is decreased. It is illustrated by reducing the number of gray levels used to represent a 1024 × 1024, 512-level image. The 256-, 128-, and 64-level images are visually identical for all practical purposes. The 32-level image, however, develops a set of ridge-like structures in areas of smooth gray levels. This effect, caused by the use of an insufficient number of gray levels in smooth areas of a digital image, is called false contouring. It is clearly visible in images displayed using 16 or fewer gray levels.
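Gray-level reduction of the kind described above can be sketched as follows (assumes NumPy; `requantize` is a hypothetical helper name):

```python
# Sketch: requantizing 256 gray levels down to fewer levels, the operation
# that produces false contouring in smooth regions of an image.
import numpy as np

def requantize(img, levels):
    """Map an 8-bit image onto `levels` equally spaced gray levels."""
    step = 256 // levels
    return (img // step) * step

ramp = np.tile(np.arange(256, dtype=np.uint8), (8, 1))   # smooth gray ramp
coarse = requantize(ramp, 16)                            # only 16 gray levels
print(len(np.unique(coarse)))  # 16 distinct levels -> visible banding
```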
3. Explain the Mach band effect.
Two phenomena demonstrate that perceived brightness is not only a function of intensity: the Mach band pattern and simultaneous contrast.
Mach band pattern:
The visual system tends to undershoot or overshoot around the boundary of regions of different intensities; this is called the Mach band pattern. Although the intensity of each stripe is constant, the brightness pattern is perceived as strongly scalloped near the boundaries, with each stripe appearing darker next to its lighter neighbor.
Simultaneous contrast is related to the fact that a region's perceived brightness does not depend only on its intensity. In the figure, all the center squares have the same intensity; however, they appear darker to the eye as the background gets lighter.
Example: A piece of paper seems white when lying on a desk, but can appear dark when used to shield the eyes while looking at a brighter sky.
4. Explain color image fundamentals.
Although the process followed by the human brain in perceiving and
interpreting color is a physiopsychological phenomenon that is not yet
fully understood, the physical nature of color can be expressed on a
formal basis supported by experimental and theoretical results.
Basically, the colors that humans and some other animals perceive in
an object are determined by the nature of the light reflected from the
object. The visible light is composed of a relatively narrow band of
frequencies in the electromagnetic spectrum. A body that reflects light
that is balanced in all visible wavelengths appears white to the
observer. For example, green objects reflect light with wavelengths
primarily in the 500 to 570 nm range while absorbing most of the
energy at other wavelengths.
Three basic quantities are used to describe the quality of a chromatic light source: radiance, luminance, and brightness. Radiance is the total amount of energy that flows from the light source, and is usually measured in watts (W). Luminance, measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source. Finally, brightness is a subjective descriptor that is practically impossible to measure.
5. Explain CMY model.
Cyan, magenta, and yellow are the secondary colors of light (equivalently, the primary colors of pigments). When a surface coated with cyan pigment is illuminated with white light, no red light is reflected from the surface: cyan subtracts red light from reflected white light, which itself is composed of equal amounts of red, green, and blue light. Most devices that deposit colored pigments on paper require CMY data input or perform an RGB-to-CMY conversion internally, using the operation
C = 1 - R
M = 1 - G
Y = 1 - B
where all color values have been normalized to the range [0, 1]. The light reflected from a surface coated with pure cyan does not contain red. RGB values can be obtained easily from a set of CMY values by subtracting the individual CMY values from 1. Combining these pigment primaries should produce black; in practice a fourth color, black, is added, giving rise to the CMYK color model used in four-color printing.
6. Describe the fundamental steps in image processing?
Digital image processing encompasses a broad range of hardware,
software and theoretical underpinnings.

The problem domain in this example consists of pieces of mail and the
objective is to read the address on each piece. Thus the desired output
in this case is a stream of alphanumeric characters.

The first step in the process is image acquisition, that is, acquiring a digital image. To do so requires an imaging sensor and the capability to digitize the signal produced by the sensor.
After the digital image has been obtained the next step deals with
preprocessing that image. The key function of this is to improve the
image in ways that increase the chances for success of the other
processes.
The next stage deals with segmentation. Broadly defined, segmentation partitions an input image into its constituent parts or objects. Its key role here is to extract individual characters and words from the background.
The output of the segmentation stage usually is raw pixel data,
constituting either the boundary of a region or all the points in the
region itself.
Choosing a representation is only part of the solution for transforming
raw data into a form suitable for subsequent computer processing.
Description also called feature selection deals with extracting features
that result in some quantitative information of interest that are basic
for differentiating one class of object from another.
The last stage involves recognition and interpretation. Recognition is the process that assigns a label to an object based on the information provided by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized objects. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database. This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information.
The knowledge base also can be quite complex such as an interrelated
list of all major possible defects in a materials inspection problem or an
image database containing high resolution satellite images of a region
in connection with change detection application.

Although we do not discuss image display explicitly at this point, it is important to keep in mind that viewing the results of image processing can take place at the output of any step.
7. Explain the basic elements of digital image processing.
The five elements of digital image processing are:
1. Image acquisition
2. Storage
3. Processing
4. Communication
5. Display
1) Image acquisition:
Two devices are required to acquire a digital image:
1) A physical sensing device, which produces an electrical signal proportional to the amount of light energy sensed.
2) A digitizer, a device for converting the electrical output into digital form.
2) Storage:
An 8-bit image of size 1024 × 1024 requires about one million bytes (eight million bits) of storage. There are three types of storage:
1. Short-term storage: used during processing. It is provided by computer memory and by frame buffers, which can store one or more images and can be accessed quickly at video rates.
2. Online storage: used for relatively fast recall. It normally uses magnetic disks; Winchester disks with hundreds of megabytes are commonly used.
3. Archival storage: passive storage devices used for infrequent access. Magnetic tapes and optical discs are the usual media; high-density magnetic tape can store one megabit in about 13 feet of tape.
3) Processing:
Processing of a digital image involves procedures that are expressed in terms of algorithms. With the exception of image acquisition and display, most image processing functions can be implemented in software; the need for specialized hardware arises when increased speed is required in an application. Large-scale image processing systems are still being used for massive imaging applications, but the trend is toward general-purpose small computers equipped with image processing hardware.
4) Communication:
Communication in image processing involves local communication between image processing systems and remote communication from one point to another, in connection with the transmission of images. Hardware and software for communication are available for most computers. A telephone line can transmit at a maximum rate of 9600 bits per second, so transmitting a 512 × 512, 8-bit image at this rate requires nearly 5 minutes. Wireless links using intermediate stations such as satellites are much faster, but they are costly.
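The 5-minute figure can be checked with simple arithmetic, assuming 10 bits per pixel on the line (8 data bits plus start/stop bits; this framing overhead is an assumption, not stated above):

```python
# Back-of-the-envelope check of the telephone-line transmission time quoted above.
bits_per_pixel = 10              # 8 data bits + start/stop bits (assumed framing)
pixels = 512 * 512
rate = 9600                      # bits per second over a telephone line

seconds = pixels * bits_per_pixel / rate
print(round(seconds / 60, 1))    # about 4.6 minutes, i.e. nearly 5 min
```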
5) Display:
Monochrome and color TV monitors are the principal display devices used in modern image processing systems. Monitors are driven by the outputs of the hardware in the display module of the computer.

8. Explain the Structure of the Human eye
The eye is nearly a sphere, with an average diameter of approximately 20 mm. Three membranes enclose the eye:
1. The cornea and sclera (outer cover)
2. The choroid
3. The retina
Cornea:
The cornea is a tough, transparent tissue that covers the anterior surface of the eye.
Sclera:
The sclera is an opaque membrane that encloses the remainder of the optic globe.
Choroid:
The choroid lies directly below the sclera. This membrane contains a network of blood vessels that serve as the major source of nutrition to the eye. The choroid coat is heavily pigmented and helps to reduce the amount of extraneous light entering the eye. The choroid is divided into the ciliary body and the iris diaphragm.
Lens:
The lens is made up of concentric layers of fibrous cells and is suspended by fibers that attach to the ciliary body. It contains 60 to 70% water, about 6% fat, and more protein than any other tissue in the eye.
Retina:
The innermost membrane of the eye is the retina, which lines the inside of the wall's entire posterior portion. There are two classes of receptors:
1. Cones
2. Rods
Cones:
The cones in each eye number between 6 and 7 million. They are located primarily in the central portion of the retina, called the fovea, and are highly sensitive to color.
Rods:
The number of rods is much larger: some 75 to 150 million are distributed over the retinal surface.
The fovea can be regarded as a square sensor array of size 1.5 mm × 1.5 mm.
9. Explain the RGB model
In the RGB model, each color appears in its primary spectral components of red, green, and blue. This model is based on a Cartesian coordinate system, and the color subspace of interest is a cube: the RGB primary values are at three corners; cyan, magenta, and yellow are at three other corners; black is at the origin; and white is at the corner farthest from the origin. In this model the gray scale extends from black to white along the line joining these two points.
Images represented in the RGB color model consist of three component images, one for each primary color. When each of the red, green, and blue component images is an 8-bit image, each RGB color pixel is said to have a depth of 24 bits. The total number of colors in a 24-bit RGB image is (2^8)^3 = 16,777,216.
Acquiring a color image is basically the reverse of this process. A color image can be acquired by using three filters, sensitive to red, green, and blue respectively. When we view a color scene with a monochrome camera equipped with one of these filters, the result is a monochrome image whose intensity is proportional to the response of that filter. Repeating this process with each filter produces three monochrome images that are the RGB component images of the color scene. The subset of colors that reproduces reliably is called the set of safe RGB colors, or the set of all-systems-safe colors; in Internet applications they are called safe Web colors or safe browser colors. Although 256 values are available per channel, only 216 colors (6 values per channel) form the safe palette.
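The color counts quoted above can be verified directly:

```python
# Quick arithmetic check of the RGB color counts.
full_rgb = (2 ** 8) ** 3        # 8 bits per channel, three channels
safe_web = 6 ** 3               # 6 standard values per channel
print(full_rgb, safe_web)       # 16777216 216
```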
10. Describe the HSI color image model
The HSI Color Model
The RGB, CMY, and other color models are not well suited for describing colors in terms that are practical for human interpretation. For example, one does not refer to the color of an automobile by giving the percentage of each of the primaries composing its color.
When humans view a color object, we describe it by its hue, saturation, and brightness.
Hue is a color attribute that describes a pure color.
Saturation gives a measure of the degree to which a pure color is diluted by white light.
Brightness is a subjective descriptor that is practically impossible to measure. It embodies the achromatic notion of intensity and is one of the key factors in describing color sensation.
Intensity is a most useful descriptor of monochromatic images.

Converting colors from RGB to HSI
Given an image in RGB color format, the H component of each RGB pixel is obtained using the equation
H = theta           if B <= G
H = 360 - theta     if B > G
with
theta = cos^-1 { (1/2)[(R - G) + (R - B)] / [(R - G)^2 + (R - B)(G - B)]^(1/2) }
The saturation component is given by
S = 1 - [3/(R + G + B)] min(R, G, B)
The intensity component is given by
I = (1/3)(R + G + B)

Converting colors from HSI to RGB
Given values of HSI in the interval [0, 1], we want to find the corresponding RGB values in the same range. We begin by multiplying H by 360, which returns the hue to its original range of [0°, 360°].
RG sector (0° <= H < 120°). When H is in this sector, the RGB components are given by the equations
B = I (1 - S)
R = I [1 + S cos H / cos(60° - H)]
G = 3I - (R + B)
GB sector (120° <= H < 240°). If the given value of H is in this sector, we first subtract 120° from it:
H = H - 120°
Then the RGB components are
R = I (1 - S)
G = I [1 + S cos H / cos(60° - H)]
B = 3I - (R + G)
BR sector (240° <= H <= 360°). Finally, if H is in this range, we subtract 240° from it:
H = H - 240°
Then the RGB components are
G = I (1 - S)
B = I [1 + S cos H / cos(60° - H)]
R = 3I - (G + B)
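The RGB-to-HSI and HSI-to-RGB equations above can be sketched as a scalar implementation (standard library only; `rgb_to_hsi`/`hsi_to_rgb` are illustrative names, and the arccosine argument is clamped to guard against floating-point roundoff, an addition not in the formulas):

```python
# Sketch of the HSI conversion equations for a single normalized RGB pixel.
import math

def rgb_to_hsi(r, g, b):
    """RGB in [0, 1] -> (H in degrees, S, I)."""
    i = (r + g + b) / 3.0
    s = 0.0 if (r + g + b) == 0 else 1.0 - 3.0 * min(r, g, b) / (r + g + b)
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        theta = 0.0                                   # gray: hue undefined, use 0
    else:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    h = theta if b <= g else 360.0 - theta
    return h, s, i

def hsi_to_rgb(h, s, i):
    """(H in degrees, S, I) -> RGB in [0, 1], by sector."""
    h = h % 360.0
    if h < 120.0:                                     # RG sector
        b = i * (1 - s)
        r = i * (1 + s * math.cos(math.radians(h)) / math.cos(math.radians(60 - h)))
        g = 3 * i - (r + b)
    elif h < 240.0:                                   # GB sector
        h -= 120.0
        r = i * (1 - s)
        g = i * (1 + s * math.cos(math.radians(h)) / math.cos(math.radians(60 - h)))
        b = 3 * i - (r + g)
    else:                                             # BR sector
        h -= 240.0
        g = i * (1 - s)
        b = i * (1 + s * math.cos(math.radians(h)) / math.cos(math.radians(60 - h)))
        r = 3 * i - (g + b)
    return r, g, b

h, s, i = rgb_to_hsi(0.9, 0.3, 0.2)
r, g, b = hsi_to_rgb(h, s, i)
print(round(r, 3), round(g, 3), round(b, 3))   # recovers 0.9 0.3 0.2
```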
11. Describe the basic relationship between the pixels
2-D Mathematical preliminaries

Neighbours of a pixel

Adjacency, Connectivity, Regions and Boundaries

Distance measures

Neighbours of a pixel

A pixel p at coordinates (x, y) has four horizontal and vertical neighbours whose coordinates are given by
(x+1, y), (x-1, y), (x, y+1), (x, y-1).
This set of pixels, called the 4-neighbours of p, is denoted by N4(p). Each pixel is a unit distance from (x, y), and some of the neighbours of p lie outside the digital image if (x, y) is on the border of the image.
The four diagonal neighbours of p have coordinates
(x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1)
and are denoted by ND(p). These points, together with the 4-neighbours, are called the 8-neighbours of p, denoted by N8(p).

Adjacency, Connectivity, Regions and Boundaries


Three types of adjacency:
4-adjacency. Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
8-adjacency. Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
m-adjacency. Two pixels p and q with values from V are m-adjacent if q is in N4(p), or q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.
A (digital) path (or curve) from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates
(x0, y0), (x1, y1), ..., (xn, yn)
where (x0, y0) = (x, y), (xn, yn) = (s, t), and pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 <= i <= n. Here n is the length of the path.

Distance measures

For pixels p,q and z with coordinates (x,y), (s,t) and (v,w)
respectively, D is a distance function or metric if

D(p,q)>=0 (D(p,q)=0 iff p=q),

D(p,q) = D(q,p) and

D(p,z) <= D(p,q) + D(q,z)

The Euclidean distance between p and q is defined as
De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)
The D4 distance (also called city-block distance) between p and q is defined as
D4(p, q) = |x - s| + |y - t|
The D8 distance (also called chessboard distance) between p and q is defined as
D8(p, q) = max(|x - s|, |y - t|)
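The three distance measures above can be sketched directly (standard library only; function names are illustrative):

```python
# Sketch of the three distance measures for pixels p = (x, y) and q = (s, t).
import math

def d_euclidean(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):                     # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):                     # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q), d4(p, q), d8(p, q))   # 5.0 7 4
```

Note how the three metrics order the same pair of pixels differently: D8 counts king-style moves, D4 counts horizontal/vertical steps, and the Euclidean distance lies between them.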

2-MARKS QUESTIONS
1.What is the need for transform?
The need for transforms arises because most signals and images are recorded in the time (or spatial) domain, i.e., they are measured as a function of time or position. This representation is not always best. For most image processing applications, a mathematical transformation is applied to the signal or image to obtain further information from it.
2. What is Image Transform?
An image can be expanded in terms of a discrete set of basis arrays called basis images. These basis images can be generated by unitary matrices. Alternatively, a given N×N image can be viewed as an N^2×1 vector. An image transform provides a set of coordinates or basis vectors for this vector space.
3. What are the applications of transform?
1) To reduce band width
2) To reduce redundancy
3) To extract feature.
4. Give the Conditions for perfect transform
1. Transpose of the matrix = inverse of the matrix.
2. Orthogonality.
5. What are the properties of unitary transform?
1) The determinant and the eigenvalues of a unitary matrix have unity magnitude.
2) The entropy of a random vector is preserved under a unitary transformation.
3) Since entropy is a measure of average information, this means information is preserved under a unitary transformation.
6. Define Fourier transform pair
The Fourier transform of f(x), denoted by F(u), is defined by
F(u) = ∫_{-∞}^{∞} f(x) e^{-j2πux} dx ---------------- (1)
The inverse Fourier transform is defined by
f(x) = ∫_{-∞}^{∞} F(u) e^{j2πux} du ---------------- (2)
Equations (1) and (2) are known as the Fourier transform pair.
7. Define Fourier spectrum and spectral density

The Fourier spectrum is defined as
F(u) = |F(u)| e^{jφ(u)}
where
|F(u)| = [R^2(u) + I^2(u)]^(1/2)
φ(u) = tan^-1[I(u)/R(u)]
The spectral density is defined by
P(u) = |F(u)|^2 = R^2(u) + I^2(u)
8. Give the relation for 1-D discrete Fourier transform pair
The discrete Fourier transform is defined by
F(u) = (1/N) Σ_{x=0}^{N-1} f(x) e^{-j2πux/N}
The inverse discrete Fourier transform is given by
f(x) = Σ_{u=0}^{N-1} F(u) e^{j2πux/N}
These equations are known as the discrete Fourier transform pair.
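The pair above can be implemented directly and verified by a round trip (a sketch assuming NumPy; with the 1/N factor on the forward transform, as in the text):

```python
# Direct implementation of the 1-D DFT pair with 1/N on the forward transform.
import numpy as np

def dft(f):
    N = len(f)
    x = np.arange(N)
    u = x.reshape(-1, 1)
    return (1.0 / N) * np.sum(f * np.exp(-2j * np.pi * u * x / N), axis=1)

def idft(F):
    N = len(F)
    u = np.arange(N)
    x = u.reshape(-1, 1)
    return np.sum(F * np.exp(2j * np.pi * x * u / N), axis=1)

f = np.array([1.0, 2.0, 4.0, 3.0])
print(np.allclose(idft(dft(f)), f))   # True: the pair inverts exactly
```

With this normalization, F(0) is the mean of the samples rather than their sum (numpy.fft instead places the 1/N on the inverse transform).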
9. Specify the properties of 2D Fourier transform.
The properties are

Separability

Translation

Periodicity and conjugate symmetry

Rotation

Distributivity and scaling

Average value

Laplacian

Convolution and correlation

Sampling

10. Mention the separability property in 2D Fourier transform


The advantage of the separability property is that F(u, v) and f(x, y) can be obtained by successive applications of the 1-D Fourier transform or its inverse:
F(u, v) = (1/N) Σ_{x=0}^{N-1} F(x, v) e^{-j2πux/N}
where
F(x, v) = N [ (1/N) Σ_{y=0}^{N-1} f(x, y) e^{-j2πvy/N} ]

11. List the Properties of twiddle factor.


1. Periodicity: W_N^(K+N) = W_N^K
2. Symmetry: W_N^(K+N/2) = -W_N^K
12. Give the Properties of one-dimensional DFT
1. The DFT and unitary DFT matrices are symmetric.
2. The extensions of the DFT and unitary DFT of a sequence and their inverse transforms are periodic with period N.
3. The DFT or unitary DFT of a real sequence is conjugate symmetric about N/2.
13. Give the Properties of two-dimensional DFT
1. Symmetric
2. Periodic extensions
3. Sampled Fourier transform
4. Conjugate symmetry.

14. What is meant by convolution?


The convolution of two functions is defined by
f(x) * g(x) = ∫_{-∞}^{∞} f(α) g(x - α) dα
where α is the dummy variable of integration.
15. State convolution theorem for 1D

If f(x) has a Fourier transform F(u) and g(x) has a Fourier transform G(u), then f(x) * g(x) has the Fourier transform F(u)G(u): convolution in the x domain can be obtained by taking the inverse Fourier transform of the product F(u)G(u). Conversely, multiplication in the x domain corresponds to convolution in the frequency domain:
f(x) g(x) <-> F(u) * G(u)
These two results are referred to as the convolution theorem.

16. What is wrap around error?


The individual periods of the (circular) convolution will overlap; this overlap is referred to as wraparound error.
17. Give the formula for correlation of 1D continuous function.


The correlation of two continuous functions f(x) and g(x) is defined by
f(x) ∘ g(x) = ∫_{-∞}^{∞} f*(α) g(x + α) dα
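A discrete analog of this definition is circular correlation, which can be checked against the Fourier relation F*(u)G(u) (a sketch assuming NumPy; note that numpy.fft places the 1/N factor on the inverse transform):

```python
# Discrete sketch of the correlation definition above: circular correlation
# computed directly and via the Fourier relation conj(F(u)) * G(u).
import numpy as np

def circular_correlation(f, g):
    """(f o g)(x) = sum over alpha of conj(f(alpha)) * g(x + alpha), indices mod N."""
    N = len(f)
    return np.array([np.sum(np.conj(f) * np.roll(g, -x)) for x in range(N)])

f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.0, 1.0, 0.5, 2.0])

direct = circular_correlation(f, g)
via_fft = np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g))
print(np.allclose(direct, via_fft))   # True: the correlation theorem holds
```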
18. What are the properties of Haar transform.
1. The Haar transform is real and orthogonal.
2. The Haar transform is a very fast transform.
3. The Haar transform has very poor energy compaction for images.
4. The basis vectors of the Haar matrix are sequency ordered.
19. What are the Properties of Slant transform
1. The Slant transform is real and orthogonal.
2. The Slant transform is a fast transform.
3. The Slant transform has very good energy compaction for images.
4. The basis vectors of the Slant matrix are not sequency ordered.


20. Specify the properties of forward transformation kernel
The forward transformation kernel is said to be separable if
g(x, y, u, v) = g1(x, u) g2(y, v)
The forward transformation kernel is symmetric if g1 is functionally equal to g2:
g(x, y, u, v) = g1(x, u) g1(y, v)
21. Define fast Walsh transform.
The Walsh transform is defined by
W(u) = (1/N) Σ_{x=0}^{N-1} f(x) Π_{i=0}^{n-1} (-1)^{b_i(x) b_{n-1-i}(u)}
where N = 2^n and b_k(z) is the k-th bit in the binary representation of z.

22. Give the relation for 1-D DCT.

The 1-D DCT is
C(u) = α(u) Σ_{x=0}^{N-1} f(x) cos[(2x+1)uπ / 2N],  where u = 0, 1, 2, ..., N-1
The inverse is
f(x) = Σ_{u=0}^{N-1} α(u) C(u) cos[(2x+1)uπ / 2N],  where x = 0, 1, 2, ..., N-1
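The DCT pair above can be implemented directly, using the usual normalization α(0) = √(1/N) and α(u) = √(2/N) for u > 0 (an assumption, since the text does not define α; sketch assumes NumPy):

```python
# Direct implementation of the 1-D DCT pair with the orthonormal alpha(u).
import numpy as np

def alpha(u, N):
    return np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)

def dct1d(f):
    N = len(f)
    x = np.arange(N)
    return np.array([alpha(u, N) * np.sum(f * np.cos((2 * x + 1) * u * np.pi / (2 * N)))
                     for u in range(N)])

def idct1d(C):
    N = len(C)
    return np.array([np.sum([alpha(v, N) * C[v] * np.cos((2 * x + 1) * v * np.pi / (2 * N))
                             for v in range(N)])
                     for x in range(N)])

f = np.array([1.0, 2.0, 3.0, 4.0])
print(np.allclose(idct1d(dct1d(f)), f))   # True: the DCT pair inverts exactly
```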

23. Write the slant transform matrix SN.
The slant transform matrix is defined recursively by

S_N = (1/√2) A_N [ S_{N/2}     0
                      0      S_{N/2} ]

where A_N is the N × N matrix

A_N = [  1     0    0ᵀ     1     0    0ᵀ
        a_N   b_N   0ᵀ   -a_N   b_N   0ᵀ
         0     0    I      0     0    I
         0    -1    0ᵀ     0     1    0ᵀ
       -b_N   a_N   0ᵀ    b_N   a_N   0ᵀ
         0     0    I      0     0   -I  ]

Here I denotes the identity matrix of order (N/2) - 2, 0 denotes a block of zeros, a_N and b_N are constants chosen so that S_N is orthogonal, and the recursion starts from
S_2 = (1/√2) [ 1   1
               1  -1 ]

24. Define Haar transform.


The Haar transform can be expressed in matrix form as
T = H F Hᵀ
where F is the N × N image matrix, H is the N × N Haar transformation matrix, and T is the resulting N × N transform.
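A minimal numeric sketch with the 2 × 2 Haar matrix (assumes NumPy; the 2 × 2 example is illustrative, not from the text):

```python
# Sketch of T = H F H^T with the orthonormal 2x2 Haar matrix.
import numpy as np

H2 = (1 / np.sqrt(2)) * np.array([[1.0, 1.0],
                                  [1.0, -1.0]])

F = np.array([[4.0, 2.0],
              [1.0, 3.0]])        # a tiny 2x2 "image"

T = H2 @ F @ H2.T                 # forward Haar transform
back = H2.T @ T @ H2              # inverse, since H2 is orthogonal
print(np.allclose(back, F))       # True: the transform is invertible
```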
25. Define K-L transform.
Consider a set of n multi-dimensional discrete signals represented as column vectors x1, x2, ..., xn, each having M elements. They are arranged into the vector
X = [X1, X2, ..., Xn]ᵀ
The K-L (Karhunen-Loeve) transform expands X in terms of the eigenvectors of its covariance matrix.

16-MARKS QUESTIONS
1. Explain the Discrete Fourier Transform in detail.
1-D case:
F(u) = (1/N) Σ_{x=0}^{N-1} f(x) exp[-j2πux/N],  for u = 0, 1, 2, ..., N-1 ------------------ (1)
f(x) = Σ_{u=0}^{N-1} F(u) exp[j2πux/N],  for x = 0, 1, 2, ..., N-1 ------------------ (2)
Equations (1) and (2) are called the discrete Fourier transform pair.
The values u = 0, 1, 2, ..., N-1 in the discrete Fourier transform correspond to samples of the continuous transform at values 0, Δu, 2Δu, ..., (N-1)Δu. In other words, F(u) corresponds to F(uΔu). The terms Δu and Δx are related by the expression Δu = 1/(NΔx).

2D Case
F(u,v)=1/MN

M-1y=0N-1f(x,y)exp[-j2ux/M+vy/N]

x=0

for u=0,1,2,.M-1, v=0,1,2,..N-1


f(x,y)=x=0M-1y=0N-1F(u,v)exp[j2ux/M+vy/N]
for x=0,1,2,.M-1, y=0,1,2,..N-1
For a square image M=N, FT pair will be
F(u, v)=1/N

N-1y=0N-1f(x,y)exp[-j2(ux +vy)/N]

x=0

for u, v=0,1,2,.N-1
f(x, y)=x=0N-1y=0N-1F(u,v)exp[j2(ux+vy)/N]
for x, y=0,1,2,.N-1
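The 1-D pair can be verified numerically. A sketch assuming NumPy; note that these notes put the 1/N factor in the forward transform while numpy.fft puts it in the inverse, so we rescale:

```python
import numpy as np

N = 8
x = np.arange(N)
f = np.cos(2 * np.pi * x / N)   # a simple real test signal

F = np.fft.fft(f) / N           # F(u) = (1/N) sum_x f(x) exp(-j2pi ux/N)
f_back = np.fft.ifft(F) * N     # f(x) = sum_u F(u) exp(+j2pi ux/N)
```

For a unit cosine at frequency u = 1, this convention places a coefficient of 0.5 at u = 1 and at u = N-1.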
2. Explain the properties of the 2-D Discrete Fourier Transform.
1. Separability
F(u,v) = (1/N) Σ(x=0 to N-1) Σ(y=0 to N-1) f(x,y) exp[-j2π(ux + vy)/N]
                for u,v = 0,1,2,...,N-1
f(x,y) = (1/N) Σ(u=0 to N-1) Σ(v=0 to N-1) F(u,v) exp[j2π(ux + vy)/N]
                for x,y = 0,1,2,...,N-1

The separability property allows F(u,v) to be obtained in two
steps by successive applications of the 1-D Fourier transform:
F(u,v) = (1/N) Σ(x=0 to N-1) F(x,v) exp[-j2πux/N]
where F(x,v) = N [ (1/N) Σ(y=0 to N-1) f(x,y) exp[-j2πvy/N] ]
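Separability is easy to demonstrate: transforming every row and then every column gives the same result as the direct 2-D transform. A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))              # arbitrary 8x8 test "image"

# Two passes of 1-D DFTs: first along each row, then along each
# column of the intermediate result.
rows = np.fft.fft(f, axis=1)
F_sep = np.fft.fft(rows, axis=0)

F_direct = np.fft.fft2(f)           # direct 2-D DFT for comparison
```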
2. Translation
The translation properties of the Fourier transform pair are

f(x,y) exp[j2π(u0x + v0y)/N]  <=>  F(u-u0, v-v0)
and
f(x-x0, y-y0)  <=>  F(u,v) exp[-j2π(ux0 + vy0)/N]

where the double arrow indicates the correspondence between a
function and its Fourier transform. Multiplying f(x,y) by the
given exponential shifts the origin of the frequency plane to
(u0,v0), and shifting f(x,y) multiplies F(u,v) by the exponential
shown.
3. Periodicity and Conjugate Symmetry

Periodicity:
The Discrete Fourier Transform and its inverse are periodic with
period N; that is,
F(u,v) = F(u+N, v) = F(u, v+N) = F(u+N, v+N)

Conjugate symmetry:
If f(x,y) is real, the Fourier transform also exhibits conjugate
symmetry,
F(u,v) = F*(-u,-v)  or  |F(u,v)| = |F(-u,-v)|
where F*(u,v) is the complex conjugate of F(u,v).
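Both properties can be checked together, since the negative indices in F(-u,-v) are taken modulo N by periodicity. A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((8, 8))              # a real-valued "image"

F = np.fft.fft2(f)

# Conjugate symmetry of a real image: F(u, v) = F*(-u, -v), with
# the negative indices wrapped modulo N (periodicity of the DFT).
u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
F_neg = F[(-u) % 8, (-v) % 8]
```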
4. Rotation
Introduce polar coordinates x = r cosθ, y = r sinθ, u = ω cosφ,
v = ω sinφ; then f(x,y) and F(u,v) become f(r,θ) and F(ω,φ)
respectively. Rotating f(x,y) by an angle θ0 rotates F(u,v) by the
same angle, and rotating F(u,v) rotates f(x,y) by the same angle;
that is,
f(r, θ+θ0)  <=>  F(ω, φ+θ0)

5. Distributivity and Scaling

Distributivity:
The Discrete Fourier Transform and its inverse are distributive
over addition but not over multiplication:
F[f1(x,y) + f2(x,y)] = F[f1(x,y)] + F[f2(x,y)]
F[f1(x,y) . f2(x,y)] ≠ F[f1(x,y)] . F[f2(x,y)]

Scaling:
For two scalars a and b,
a f(x,y)  <=>  a F(u,v)   and   f(ax, by)  <=>  (1/|ab|) F(u/a, v/b)

6. Laplacian
The Laplacian of a two-variable function f(x,y) is defined as
∇²f(x,y) = ∂²f/∂x² + ∂²f/∂y²
From the definition of the 2-D Fourier transform,
∇²f(x,y)  <=>  -(2π)² (u² + v²) F(u,v)
so the Laplacian can be computed in the frequency domain.
7. Convolution and Correlation

Convolution:
The convolution of two functions f(x) and g(x), denoted f(x)*g(x),
is defined by the integral
f(x)*g(x) = ∫(-∞ to ∞) f(α) g(x-α) dα
where α is a dummy variable.

By the convolution theorem, convolution in one domain corresponds
to multiplication in the other:
f(x)*g(x)  <=>  F(u) G(u)   and   f(x) g(x)  <=>  F(u)*G(u)

Correlation:
The correlation of two functions f(x) and g(x), denoted
f(x) o g(x), is defined by the integral
f(x) o g(x) = ∫(-∞ to ∞) f*(α) g(x+α) dα
where α is a dummy variable.

For the discrete case,
fe(x) o ge(x) = (1/M) Σ(m=0 to M-1) fe*(m) ge(x+m)
where
fe(x) = { f(x), 0 ≤ x ≤ A-1
        { 0,    A ≤ x ≤ M-1
ge(x) = { g(x), 0 ≤ x ≤ B-1
        { 0,    B ≤ x ≤ M-1

3. Discuss the Hadamard transform in detail.

1-D Hadamard Transform
H(u) = (1/N) Σ(x=0 to N-1) f(x) (-1)^[Σ(i=0 to n-1) bi(x) bi(u)]
     = Σ(x=0 to N-1) f(x) g(x,u)

where g(x,u) = (1/N) (-1)^[Σ(i=0 to n-1) bi(x) bi(u)] is the 1-D
forward Hadamard kernel, N = 2^n, and bk(z) is the kth bit in the
binary representation of z.

Inverse 1-D Hadamard Transform
f(x) = Σ(u=0 to N-1) H(u) (-1)^[Σ(i=0 to n-1) bi(x) bi(u)]
     = Σ(u=0 to N-1) H(u) h(x,u),   x = 0,1,...,N-1

where h(x,u) = (-1)^[Σ(i=0 to n-1) bi(x) bi(u)] is the inverse
kernel.
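The bit-product kernel above can be implemented directly (this is the slow O(N²) form, not the fast transform). A sketch assuming NumPy, using the fact that Σ bi(x)bi(u) is the popcount of the bitwise AND of x and u:

```python
import numpy as np

def hadamard_1d(f):
    """H(u) = (1/N) sum_x f(x) (-1)^(sum_i b_i(x) b_i(u)), N = 2^n."""
    N = len(f)
    H = np.zeros(N)
    for u in range(N):
        for x in range(N):
            # sum of products of corresponding bits of x and u
            s = bin(x & u).count("1")
            H[u] += f[x] * (-1) ** s
    return H / N

f = np.array([1.0, 2.0, 3.0, 4.0])
Hu = hadamard_1d(f)
# The inverse uses the same kernel without the 1/N factor.
f_back = hadamard_1d(Hu) * len(f)
```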

2-D Hadamard Transform
H(u,v) = (1/N) Σ(x=0 to N-1) Σ(y=0 to N-1) f(x,y) (-1)^[Σ(i=0 to n-1) (bi(x)bi(u) + bi(y)bi(v))]
       = Σ(x=0 to N-1) Σ(y=0 to N-1) f(x,y) g(x,y,u,v)

where g(x,y,u,v) = (1/N) (-1)^[Σ(i=0 to n-1) (bi(x)bi(u) + bi(y)bi(v))]

Similarly,
f(x,y) = (1/N) Σ(u=0 to N-1) Σ(v=0 to N-1) H(u,v) (-1)^[Σ(i=0 to n-1) (bi(x)bi(u) + bi(y)bi(v))]
       = Σ(u=0 to N-1) Σ(v=0 to N-1) H(u,v) h(x,y,u,v)

where h(x,y,u,v) = (1/N) (-1)^[Σ(i=0 to n-1) (bi(x)bi(u) + bi(y)bi(v))]
is the inverse kernel. The forward and inverse kernels are
therefore identical.

Ordered Hadamard Transform

1-D Ordered Hadamard Transform
H(u) = (1/N) Σ(x=0 to N-1) f(x) (-1)^[Σ(i=0 to n-1) bi(x) pi(u)]
     = Σ(x=0 to N-1) f(x) g(x,u)

where g(x,u) = (1/N) (-1)^[Σ(i=0 to n-1) bi(x) pi(u)] and the
terms pi(u) are computed from the bits of u as
p0(u)   = b(n-1)(u)
p1(u)   = b(n-1)(u) + b(n-2)(u)
p2(u)   = b(n-2)(u) + b(n-3)(u)
.
.
pn-1(u) = b1(u) + b0(u)

Inverse 1-D Ordered Hadamard Transform
f(x) = Σ(u=0 to N-1) H(u) (-1)^[Σ(i=0 to n-1) bi(x) pi(u)]
     = Σ(u=0 to N-1) H(u) h(x,u)

where h(x,u) = (-1)^[Σ(i=0 to n-1) bi(x) pi(u)]

2-D Ordered Hadamard Transform Pair
H(u,v) = (1/N) Σ(x=0 to N-1) Σ(y=0 to N-1) f(x,y) (-1)^[Σ(i=0 to n-1) (bi(x)pi(u) + bi(y)pi(v))]
       = Σ(x=0 to N-1) Σ(y=0 to N-1) f(x,y) g(x,y,u,v)

where g(x,y,u,v) = (1/N) (-1)^[Σ(i=0 to n-1) (bi(x)pi(u) + bi(y)pi(v))]

Similarly,
f(x,y) = (1/N) Σ(u=0 to N-1) Σ(v=0 to N-1) H(u,v) (-1)^[Σ(i=0 to n-1) (bi(x)pi(u) + bi(y)pi(v))]
       = Σ(u=0 to N-1) Σ(v=0 to N-1) H(u,v) h(x,y,u,v)

where h(x,y,u,v) = (1/N) (-1)^[Σ(i=0 to n-1) (bi(x)pi(u) + bi(y)pi(v))]

4. Explain the Discrete Cosine Transform in detail.

The discrete cosine transform (DCT) gets its name from the fact
that the rows of the N x N transform matrix C are obtained as a
function of cosines:

[C]i,j = √(1/N) cos[(2j+1)iπ / 2N],   i = 0;           j = 0,1,...,N-1
       = √(2/N) cos[(2j+1)iπ / 2N],   i = 1,2,...,N-1; j = 0,1,...,N-1
The rows of the transform matrix are shown in graphical form. The
amount of variation increases as we progress down the rows; that
is, the frequency of the rows increases from top to bottom.

[Fig. Basis set of the discrete cosine transform. The numbers
correspond to the rows of the transform matrix.]
Also, the basis matrices show increased variation as we go from
the top-left matrix, corresponding to the (0,0) coefficient, to
the bottom-right matrix, corresponding to the (N-1, N-1)
coefficient.

The DCT is closely related to the discrete Fourier transform
(DFT), and the DCT can be obtained from the DFT. In terms of
compression, the DCT performs better than the DFT.
In the DFT, to find the Fourier coefficients for a sequence of
length N, we assume that the sequence is periodic with period N.
This assumed periodic extension generally does not match the true
behavior of the sequence outside the interval, and it introduces
sharp discontinuities at the beginning and the end of the
sequence. In order to represent these discontinuities, the DFT
needs nonzero coefficients for the high-frequency components.
Because these components are needed only at the two end points of
the sequence, their effect must be cancelled out at all other
points, so the DFT adjusts the remaining coefficients accordingly.
When we discard the high-frequency coefficients during the
compression process, the coefficients that were cancelling out the
high-frequency effect in other parts of the sequence result in the
introduction of additional distortion.
The DCT can be obtained using the DFT by mirroring the original
N-point sequence to obtain a 2N-point sequence. The DCT is simply
the first N points of the resulting 2N-point DFT. When we take the
DFT of the 2N-point mirrored sequence, we again have to assume
periodicity; here, however, the assumption does not introduce any
sharp discontinuities at the edges.
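The mirroring construction can be verified numerically. A sketch assuming NumPy, working with the unnormalized DCT-II; the half-sample phase factor below undoes the offset introduced by the mirrored indexing:

```python
import numpy as np

def dct_via_mirrored_dft(x):
    """Unnormalized DCT-II from the DFT of the mirrored 2N-point sequence."""
    N = len(x)
    y = np.concatenate([x, x[::-1]])   # 2N-point mirrored sequence
    Y = np.fft.fft(y)                  # 2N-point DFT
    k = np.arange(N)
    # Undo the half-sample phase shift and keep the first N points.
    return np.real(np.exp(-1j * np.pi * k / (2 * N)) * Y[:N]) / 2.0

def dct_direct(x):
    """Direct unnormalized DCT-II: C(k) = sum_n x(n) cos[(2n+1)k*pi/2N]."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos((2 * n + 1) * k * np.pi / (2 * N)))
                     for k in range(N)])

x = np.array([1.0, 2.0, 4.0, 3.0])
```

Both routes give the same coefficients, illustrating that the mirrored extension has no end-point discontinuities to encode.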

The DCT is better at energy compaction for most correlated
sources when compared to the DFT. For Markov sources with a high
correlation coefficient

ρ = E[x(n) x(n+1)] / E[x(n)²],

the compaction ability of the DCT is very close to that of the
KLT. As many sources can be modelled as Markov sources with high
values of ρ, this superior compaction ability has made the DCT the
most popular transform. It is a part of many international
standards, including JPEG, MPEG and CCITT H.261.

2-MARKS QUESTIONS
1. Specify the objective of image enhancement technique.
The objective of an enhancement technique is to process an image
so that the result is more suitable than the original image for a
particular application.

2. List the 2 categories of image enhancement.
Spatial domain methods refer to the image plane itself;
approaches in this category are based on direct manipulation of
the pixels of the image.
Frequency domain methods are based on modifying the Fourier
transform of the image.

3. What is the purpose of image averaging?
An important application of image averaging is in the field of
astronomy, where imaging with very low light levels is routine,
and sensor noise frequently renders single images virtually
useless for analysis. Averaging K noisy images of the same scene
reduces the noise variance by a factor of K.
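This noise reduction is easy to demonstrate on synthetic data; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((32, 32), 100.0)    # ideal noise-free image
sigma = 10.0

# Average K noisy acquisitions of the same scene: the noise
# variance of the average drops as sigma^2 / K.
K = 100
frames = clean + rng.normal(0.0, sigma, size=(K, 32, 32))
averaged = frames.mean(axis=0)

noise_single = frames[0].std()      # roughly sigma
noise_avg = averaged.std()          # roughly sigma / sqrt(K)
```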

4. What is meant by masking?
A mask is a small 2-D array in which the values of the mask
coefficients determine the nature of the process. The enhancement
techniques based on this type of approach are referred to as mask
processing.

5. Give the masks used for high-boost filtering.

 0   -1    0
-1   A+4  -1
 0   -1    0

(based on the 4-neighbour Laplacian)

-1   -1   -1
-1   A+8  -1
-1   -1   -1

(based on the 8-neighbour Laplacian)
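As a sketch of how such a mask is applied (assuming NumPy and a float grayscale image; leaving the one-pixel border untouched is a simplification, not part of the definition):

```python
import numpy as np

def high_boost(image, A=1.2):
    """High-boost filtering with the 4-neighbour (A+4)-centre mask, A >= 1."""
    mask = np.array([[ 0.0,  -1.0,  0.0],
                     [-1.0, A + 4, -1.0],
                     [ 0.0,  -1.0,  0.0]])
    out = image.copy()
    H, W = image.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            # correlate the mask with the 3x3 neighbourhood
            out[i, j] = np.sum(mask * image[i - 1:i + 2, j - 1:j + 2])
    return out

# On a flat region of gray level c, the mask coefficients sum to A,
# so the output is simply A*c (the boosted background).
flat = np.full((5, 5), 10.0)
boosted = high_boost(flat, A=2.0)
```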

6. What is a Median filter?


The median filter replaces the value of a pixel by the median of
the gray levels in the neighborhood of that pixel.
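A minimal sketch of this idea, assuming NumPy (borders are left unchanged for simplicity):

```python
import numpy as np

def median_filter(image, size=3):
    """Replace each interior pixel by the median of its size x size
    neighbourhood."""
    out = image.copy()
    r = size // 2
    H, W = image.shape
    for i in range(r, H - r):
        for j in range(r, W - r):
            out[i, j] = np.median(image[i - r:i + r + 1, j - r:j + r + 1])
    return out

img = np.full((5, 5), 50.0)
img[2, 2] = 255.0            # a single salt-noise pixel
filtered = median_filter(img)
```

The isolated bright pixel is removed entirely, which is why the median filter is effective against impulse (salt-and-pepper) noise.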
7. What is a maximum filter and a minimum filter?
The 100th percentile filter is the maximum filter, used for
finding the brightest points in an image. The 0th percentile
filter is the minimum filter, used for finding the darkest points
in an image.
8. Write the applications of sharpening filters.
1. Electronic printing, medical imaging and industrial
applications.
2. Autonomous target detection in smart weapons.

9. Name the different types of derivative filters.
1. Prewitt operators
2. Roberts cross-gradient operators
3. Sobel operators

10. What is meant by Image Restoration?
Restoration attempts to reconstruct or recover an image that has
been degraded, by using a priori knowledge of the degradation
phenomenon.
11. What are the two properties of a Linear Operator?
1. Additivity
2. Homogeneity

12. Give the additivity property of a Linear Operator.
H[f1(x,y) + f2(x,y)] = H[f1(x,y)] + H[f2(x,y)]
The additivity property says that if H is a linear operator, the
response to a sum of two inputs is equal to the sum of the two
responses.
13. How is a degradation process modeled?

f(x,y) ---> [ H ] ---> (+) ---> g(x,y)
                        ^
                        |
                      η(x,y)

A system operator H, together with an additive noise term η(x,y),
operates on an input image f(x,y) to produce a degraded image
g(x,y):
g(x,y) = H[f(x,y)] + η(x,y)
14. Give the homogeneity property of a Linear Operator.
H[k1 f1(x,y)] = k1 H[f1(x,y)]
The homogeneity property says that the response to a constant
multiple of any input is equal to the response to that input
multiplied by the same constant.

15. Give the relation for the degradation model for a continuous
function.
g(x,y) = ∫∫ f(α,β) h(x-α, y-β) dα dβ + η(x,y)
16. What is the Fredholm integral of the first kind?
g(x,y) = ∫∫ f(α,β) h(x,α,y,β) dα dβ
This is called the superposition (or Fredholm) integral of the
first kind. It states that if the response of H to an impulse is
known, the response to any input f(α,β) can be calculated by means
of this integral.
17. What is the concept of the algebraic approach?
The concept of the algebraic approach is to estimate the original
image in a way that minimizes a predefined criterion of
performance.
18. What are the two methods of algebraic approach?

Unconstrained restoration approach

Constrained restoration approach

19. Define Gray-level interpolation


Gray-level interpolation deals with the assignment of gray levels
to pixels in the spatially transformed image
20. What is meant by Noise probability density function?
The spatial noise descriptor is the statistical behavior of gray
level values in the noise component of the model.
21. Why is the restoration called unconstrained restoration?
In the absence of any knowledge about the noise η, a meaningful
criterion function is to seek an estimate f^ such that H f^
approximates g in a least-squares sense, by assuming that the
noise term is as small as possible.
Where H = system operator,
f^ = estimated input image,
g = degraded image.
22. Which is the most frequent method to overcome the difficulty
of formulating the spatial relocation of pixels?
The most frequent method is the use of tiepoints, which are
subsets of pixels whose locations in the input (distorted) and
output (corrected) images are known precisely.
23. What are the three methods of estimating the degradation
function?
1. Observation
2. Experimentation
3. Mathematical modeling.
24. What are the types of noise models?
1. Gaussian noise
2. Rayleigh noise
3. Erlang (gamma) noise
4. Exponential noise
5. Uniform noise
6. Impulse (salt-and-pepper) noise

25. Give the relation for Gaussian noise.
The PDF of a Gaussian random variable z is given by

p(z) = (1 / √(2π)σ) e^[-(z-μ)² / 2σ²]

where
z  -> gray-level value
σ  -> standard deviation
σ² -> variance of z
μ  -> mean of the gray-level values
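Noise with this PDF is straightforward to generate and add to an image; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 0.0, 20.0

# Sample Gaussian noise with the PDF above, add it to a flat test
# image, and clip back to the valid gray-level range [0, 255].
image = np.full((64, 64), 128.0)
noise = rng.normal(mu, sigma, size=image.shape)
noisy = np.clip(image + noise, 0, 255)
```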
