PPT-1
PROCESSING
CONTENT
➢ Introduction
➢ Applications
➢ History
Will Discuss:
1. Introduction to various Image processing techniques and their
applications
2. Details of different image processing algorithms
Typical Applications:
FIGURE 1.1 A digital picture produced in 1921 from a coded tape by a telegraph
printer with special typefaces.
In 1921, the printing procedure was changed to photographic reproduction from tapes
perforated at the telegraph receiving terminal. This improved both the tonal quality
and the resolution. The images had five distinct levels of gray.
In the late 1960s and early 1970s, digital image processing began to be used in
medical imaging, remote Earth resources observations, and astronomy.
CONTENT
➢ What is an Image?
➢ Digital image vs Analog image
➢ Image representation
➢ Image formation model
➢ Advantages and Disadvantages of Digital image
Image:
IMAGE ACQUISITION USING SENSOR ARRAYS:
IMAGE REPRESENTATION:
➢ An image is a 2-D Light intensity function f(x,y). A digital image f(x,y) is
discretized both in spatial coordinates and brightness.
➢ A two-dimensional function f(x,y), where (x,y) are the spatial (plane)
coordinates, and the amplitude of f at any particular coordinates (x,y) is called
the intensity or gray level of the image at that point.
For an image with M rows and N columns:
x = 0, 1, 2, 3, 4, …, M−1 (rows)
y = 0, 1, 2, 3, 4, …, N−1 (columns)
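The M × N indexing convention above can be illustrated with a small array. NumPy is assumed here purely for illustration; the specific intensity values are made up.

```python
import numpy as np

# A small digital image: M = 4 rows, N = 5 columns.
# f[x, y] is the intensity (gray level) at row x, column y.
f = np.array([
    [  0,  32,  64,  96, 128],
    [ 32,  64,  96, 128, 160],
    [ 64,  96, 128, 160, 192],
    [ 96, 128, 160, 192, 224],
], dtype=np.uint8)   # 8-bit gray levels, 0..255

M, N = f.shape            # M rows, N columns
print(M, N)               # 4 5
print(f[0, 0], f[3, 4])   # corner intensities: 0 224
```

Row index x runs from 0 to M−1 and column index y from 0 to N−1, exactly as in the indexing scheme above.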
A SIMPLE IMAGE FORMATION MODEL:
Types:
1. Pixel coordinate system
2. Spatial coordinate systems
➢ For example, in the pixel coordinate system a pixel is treated as a discrete unit,
uniquely identified by an integer (row, column) pair, such as (3,4).
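The distinction between the two coordinate systems can be sketched in a few lines. The mapping below follows one common convention (pixel centers at integer spatial locations, with x along columns and y along rows); both the function name and the convention are illustrative assumptions, not a fixed standard.

```python
# Pixel coordinates: a pixel is a discrete unit addressed by an
# integer (row, column) pair, e.g. (3, 4).
# Spatial coordinates: positions are continuous, so fractional
# locations between pixel centers are meaningful.

def pixel_center_to_spatial(row, col):
    # Illustrative mapping only: the center of pixel (row, col)
    # is placed at spatial location (x, y) = (col, row).
    return (float(col), float(row))

print(pixel_center_to_spatial(3, 4))  # (4.0, 3.0)
```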
Analog Image:
(a). A two-dimensional function f(x,y), where (x,y) are the spatial coordinates, and
the value of f at any particular coordinates (x,y) is called the intensity or gray
level of the image at that point.
(b). When x, y, and f are all finite and discrete quantities, the image is a digital image.
Analog Image → Sampling → Quantization → Digital Image
➢ These elements are called picture elements, image elements, pels or pixels.
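The sampling and quantization steps can be sketched numerically. In this sketch a densely tabulated 1-D signal stands in for one scan line of an analog image; the sampling rate (every 100th value) and the number of gray levels (4) are arbitrary choices for illustration.

```python
import numpy as np

# "Analog" signal: a continuous function tabulated very densely.
t = np.linspace(0.0, 1.0, 1000)              # pseudo-continuous axis
analog = 0.5 + 0.5 * np.sin(2 * np.pi * t)   # values in [0, 1]

# Sampling: digitize the coordinate by keeping every 100th value,
# giving 10 spatial samples.
sampled = analog[::100]

# Quantization: digitize the amplitude by mapping each sample
# to one of 4 discrete gray levels (2 bits per pixel).
levels = 4
digital = np.round(sampled * (levels - 1)).astype(np.uint8)

print(digital)  # integers in the range 0..3
```

Sampling discretizes the spatial coordinate; quantization discretizes the amplitude. Together they turn the analog signal into the finite, discrete quantities that define a digital image.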
Advantages:
➢ Fast processing, cost effective, effective storage, efficient transmission, scope
for versatile image manipulations.
➢ Humans are limited to the visual band of the electromagnetic (EM) spectrum.
Imaging machines, however, cover almost the entire EM spectrum, ranging from
gamma rays to radio waves, and can therefore operate on images generated by
sources that humans cannot sense. These include ultrasound, electron microscopy,
and computer-generated images.
Disadvantages:
➢ Good-quality images require large amounts of memory and hence a fast processor.
Electromagnetic (EM) spectrum
➢ Gamma rays: nuclear medicine and astronomical observations, e.g., bone scans
➢ X-rays: medical diagnostics and astronomy
and so on for the remaining bands
➢ These include ultrasound, electron microscopy, and computer-generated
images. Thus, digital image processing encompasses a wide and varied field of
applications
➢ The principal energy source for images in use today is the electromagnetic
energy spectrum.
FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING
It is helpful to divide the material of DIP into two broad categories: methods whose
input and output are images, and methods whose inputs may be images but whose
outputs are attributes extracted from those images.
Image restoration is an area that also deals with improving the appearance of an
image. However, unlike enhancement, which is subjective, image restoration is
objective, in the sense that restoration techniques tend to be based on
mathematical or probabilistic models of image degradation.
Color image processing: color is also used as the basis for extracting features of
interest in an image.
Compression, as the name implies, deals with techniques for reducing the storage
required to save an image, or the bandwidth required to transmit it.
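As a toy illustration of the compression idea, the sketch below run-length encodes one image row: runs of identical pixel values are collapsed into (value, count) pairs. This specific scheme is an assumption chosen for simplicity; real image codecs such as JPEG are far more elaborate.

```python
def rle_encode(pixels):
    # Collapse runs of identical values into [value, count] pairs.
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([p, 1])   # start a new run
    return runs

row = [255, 255, 255, 255, 0, 0, 255, 255]
encoded = rle_encode(row)
print(encoded)  # [[255, 4], [0, 2], [255, 2]]
```

Eight pixel values are stored as three pairs, which hints at why images with large uniform regions compress well.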
Morphological processing deals with tools for extracting image components that
are useful in the representation and description of shape. This material begins a
transition from processes that output images to processes that output image
attributes.
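One of the basic morphological tools is binary dilation, sketched below in pure NumPy with a 3 × 3 square structuring element (an assumption; real toolkits support arbitrary structuring elements).

```python
import numpy as np

def dilate(img, k=3):
    """Binary dilation with a k x k square structuring element:
    each output pixel is the maximum over its k x k neighborhood."""
    p = k // 2
    padded = np.pad(img, p, mode="constant")  # zero-pad the border
    out = np.zeros_like(img)
    M, N = img.shape
    for x in range(M):
        for y in range(N):
            out[x, y] = padded[x:x + k, y:y + k].max()
    return out

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 2] = 1                 # single foreground pixel
grown = dilate(img)
print(grown)                  # the pixel grows into a 3x3 square
```

Dilation thickens shape components; its dual, erosion, thins them, and combinations of the two yield the opening and closing operations used for shape description.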
Segmentation partitions an image into its constituent parts or objects. In general,
the more accurate the segmentation, the more likely automated object classification
is to succeed.
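The simplest segmentation method is global thresholding, sketched below. The threshold value T = 128 is picked by hand for this toy image; in practice it would be chosen from the histogram.

```python
import numpy as np

def threshold_segment(img, T):
    # Partition the image into object (1) and background (0)
    # by comparing every pixel against a global threshold T.
    return (img > T).astype(np.uint8)

img = np.array([
    [ 10,  12, 200, 210],
    [ 11, 205, 220,  13],
    [ 14,  15,  16, 198],
], dtype=np.uint8)

mask = threshold_segment(img, T=128)
print(mask)  # 1 where the bright "object" pixels are
```

The resulting binary mask is exactly the kind of region/boundary output that the feature-extraction stage described next consumes.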
Feature extraction almost always follows the output of a segmentation stage, which
usually is raw pixel data, constituting either the boundary of a region (i.e., the set of
pixels separating one image region from another) or all the points in the region
itself.
Image pattern classification is the process that assigns a label (e.g., “vehicle”) to
an object based on its feature descriptors.
(1). Physical sensor that responds to the energy radiated by the object we wish
to image
(2). Digitizer: a device for converting the output of the physical sensing device
into digital form.
This type of hardware is sometimes called a front-end subsystem, and its most
distinguishing characteristic is speed.
This unit performs functions that require fast data throughputs that the typical
main computer cannot handle.
The computer in an image processing system is a general-purpose computer and
can range from a PC to a supercomputer.
An image of size 1024 × 1024 pixels, in which the intensity of each pixel is an 8-bit
quantity, requires one megabyte of storage space if the image is not compressed.
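The one-megabyte figure follows directly from the image dimensions:

```python
# Uncompressed storage for a 1024 x 1024 image
# with 8 bits (1 byte) per pixel.
rows, cols, bytes_per_pixel = 1024, 1024, 1
size_bytes = rows * cols * bytes_per_pixel
print(size_bytes)                 # 1048576 bytes
print(size_bytes / 2**20, "MB")   # 1.0 MB
```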
Three categories:
1. Short-term storage for use during processing
2. On-line storage for relatively fast recall
3. Archival storage characterized by infrequent access.
Image displays in use today are mainly color, flat screen monitors.
Hardcopy devices for recording images include laser printers, film cameras,
heat-sensitive devices, ink-jet units, and digital units such as optical and CD-ROM
disks.
Networking and cloud communication are almost default functions in any computer
system in use today.