
DIGITAL IMAGE PROCESSING
CONTENT
➢ Introduction
➢ Applications
➢ History
Will Discuss:
1. Introduction to various Image processing techniques and their
applications
2. Details of different image processing algorithms

WHAT DOES DIGITAL IMAGE PROCESSING MEAN?

Digital | Image | Processing

Processing of images which are digital in nature by a digital computer


WHY DO WE NEED TO PROCESS THE IMAGES?

It is motivated by three major applications:

1. Improvement of pictorial information for human perception (enhancing the quality of the image)
2. Image processing for autonomous machine applications (industrial quality control, assembly automation, etc.)
3. Efficient storage and transmission (making images efficient to transmit over low-bandwidth communication channels)
HUMAN PERCEPTION APPLICATIONS

Employ methods capable of enhancing pictorial information for human interpretation and analysis.

Typical Applications:

1. Noise Filtering (removing noise from the picture)
2. Content Enhancement
   - Contrast Enhancement
   - Deblurring (blur caused by camera settings, pictures taken from a moving platform such as a car or train, or an out-of-focus lens)
3. Remote Sensing (aerial images taken from satellites)
4. Medical Imaging (CT scans for brain tumours: location and size of the tumour; mammogram images for detection of cancerous tissue; ultrasonograms to study the growth of a fetus)
Example figures [1]:
➢ Low-contrast image: less information is available, e.g. the river outline is not clearly visible.
➢ CT scan images for detection of a brain tumour: location and size of the tumour.
➢ Mammogram images for detection of cancerous tissue.
➢ Satellite images: we can study whether the river has changed its path, the growth of vegetation over a certain region, pollution in that area, where to build a new city, etc.
➢ Forest fire: the extent of the fire and the direction in which it is moving.
➢ Cloud formation in a certain region: possibility of rain or storms; DIP is useful here.
➢ Formation of the ozone hole: unwanted rays can enter the Earth's surface through that hole.

[1] NPTEL lecture
MACHINE VISION APPLICATIONS

Here the interest is in procedures for extracting image information suitable for computer processing.

Typical Applications:

1. Industrial machine vision for product assembly and inspection
2. Automated target detection and tracking
3. Fingerprint recognition
4. Machine processing of aerial and satellite imagery for weather prediction, crop assessment, etc.
Example figure [1]: checking the quality of the product, i.e. detection of fully filled, partially filled, or empty bottles.
IMAGE CONTAINS TWO TYPES OF ENTITIES:

1. Information content of the image
2. Redundancy (we process the image to remove the redundancy present in it and retain only the information)

IMAGE COMPRESSION APPLICATION (to reduce the size of an image)

An image usually contains a lot of redundancy that can be exploited to achieve compression.

Three types of Redundancy:
1. Pixel Redundancy
2. Coding Redundancy
3. Psychovisual Redundancy

Applications: reduced storage; reduction in bandwidth
Pixel Redundancy [1]
➢ Look at the blue circular region: within this region the intensity of the image is more or less uniform. This means that if we know the intensity at a particular point in the region, we can predict the intensity of its neighbouring points.
➢ If such prediction is possible, why should we store all of those image points?
➢ Hence we store one point, and its neighbourhood can be predicted using a prediction mechanism. In this way the same information is stored in much less space.
➢ Non-uniform regions: eyes, hat boundary, hair, etc.
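The prediction idea above can be made concrete with a minimal sketch (an illustration of the principle, not the coding scheme of any particular standard; the array values are made up): store the first pixel of each row and then only the difference between each pixel and its left neighbour. In a nearly uniform region most differences are zero, which a later entropy coder can store in far fewer bits.

```python
import numpy as np

def row_predictive_encode(image):
    """Encode each row as (first pixel, differences from the left neighbour)."""
    image = image.astype(np.int16)       # avoid uint8 wrap-around in the differences
    residuals = np.diff(image, axis=1)   # each pixel minus its left neighbour
    return image[:, :1], residuals

def row_predictive_decode(first_column, residuals):
    """Rebuild the image by cumulatively adding the residuals back."""
    return np.cumsum(np.hstack([first_column, residuals]), axis=1).astype(np.uint8)

# A nearly uniform 4x5 region: the residuals are almost all zero.
region = np.array([[100, 100, 101, 100, 100],
                   [100, 101, 101, 100, 100],
                   [ 99, 100, 100, 100, 101],
                   [100, 100, 100, 101, 100]], dtype=np.uint8)
first, res = row_predictive_encode(region)
assert np.array_equal(row_predictive_decode(first, res), region)   # lossless round trip
```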
[1] NPTEL lecture
Same visual quality as the original image: 1:55 (compressed 55 times).
Different visual quality from the original image: 1:156 (compressed 156 times; a number of blocked regions, i.e. blocking artifacts).
Original image: retains both the redundancy and the information.
Image 2: redundancy removed, only the information retained.
Image 3: redundancy removed and some of the information removed.

LOSSY COMPRESSION: it removes both redundancy and some of the information present in the image. After this, the quality of the reconstructed image is still acceptable, but it will not be the same as the original. This trade-off is handled by the rate-distortion theorem.

Let the original image size = 256 x 256 bytes = 64 KBytes
Image 2 = 10 KBytes
Image 3 = 500 Bytes

So, the space reduction is controlled by this factor.
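Using the sizes above, the compression ratios can be checked directly (a quick sketch; it assumes 1 KByte = 1024 bytes):

```python
original = 256 * 256     # 65,536 bytes = 64 KBytes (one byte per pixel)
image2 = 10 * 1024       # 10 KBytes: redundancy removed
image3 = 500             # 500 bytes: redundancy and some information removed

print(original / image2)   # about 6.4 : 1
print(original / image3)   # about 131 : 1
```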


HISTORY OF DIGITAL IMAGE PROCESSING:

In the 1920s, submarine cables were used to transmit digitized newspaper pictures between London and New York via the "Bartlane system" (in less than three hours).

Bartlane cable picture transmission system: Transmitter (digitized newspaper pictures, London) -> submarine cables -> Receiver (reproduction of pictures by telegraphic printers, New York).

FIGURE 1.1 A digital picture produced in 1921 from a coded tape by a telegraph
printer with special typefaces.
In 1921, the printing procedure was changed to photographic reproduction from tapes perforated at telegraph receiving terminals. This improved both tonal quality and resolution. The system had five distinct levels of gray.

Figures: reproduction of pictures by telegraphic printers vs. reproduction of pictures by photographic printers.

By 1929, the Bartlane system was capable of coding 15 distinct brightness levels of gray.

Figure: unretouched cable picture of Generals Pershing (right) and Foch, transmitted in 1929 from London to New York by 15-tone equipment.

For the next 35 years, improvement of processing techniques continued.

In the 1940s, John von Neumann introduced two key concepts: (1) a memory to hold a stored program and data, and (2) conditional branching. These two ideas are the foundation of the central processing unit (CPU), which is at the heart of computers today. Starting with von Neumann, a series of key advances led to computers powerful enough to be used for digital image processing.

In 1964, computer processing techniques were used when pictures of the moon transmitted by Ranger 7 (a U.S. spacecraft) were processed by a computer to correct various types of image distortion inherent in the on-board television camera. This was the basis of modern image processing techniques.

In the late 1960s and early 1970s, digital image processing began to be used in medical imaging, remote Earth resources observations, and astronomy.
CONTENT
➢ What is an Image?
➢ Digital image vs Analog image
➢ Image representation
➢ Image formation model
➢ Advantages and Disadvantages of Digital image
Image:

It is a projection of a 3D scene onto a 2D plane.

Figure: a 3D scene (axes X, Y, Z) projected onto a 2D image plane (axes X, Y).
IMAGE ACQUISITION USING SENSOR ARRAYS:
IMAGE REPRESENTATION:
➢ An image is a 2-D light intensity function f(x,y). A digital image f(x,y) is discretized both in spatial coordinates and in brightness.

➢ A two-dimensional function f(x,y), where (x,y) are the spatial (plane) coordinates and the amplitude of f at any particular coordinates (x,y) is called the intensity or gray level of the image at that point.

➢ It can be considered as a matrix whose row and column indices specify a point in the image, and whose element value identifies the gray level at that point. These elements are called pixels / pels / picture elements / image elements.

Figure: an M x N digital image (M rows, N columns), with
x = 0, 1, 2, ..., M-1 and y = 0, 1, 2, ..., N-1.
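In code, this matrix view maps directly onto a 2-D array. A minimal sketch (the array contents are random and purely illustrative), using the same 0-based indexing as above with x as the row index and y as the column index:

```python
import numpy as np

M, N = 4, 6                                                  # rows x columns
f = np.random.randint(0, 256, size=(M, N), dtype=np.uint8)   # 8-bit gray levels

x, y = 2, 3                 # x = 0..M-1 (row), y = 0..N-1 (column)
print(f[x, y])              # gray level of the pixel at (x, y)
print(f.shape)              # (M, N)
```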
A SIMPLE IMAGE FORMATION MODEL:

f(x,y) = r(x,y) * i(x,y)

where
r(x,y) = the reflectivity (reflectance) of the surface at the corresponding image point,
i(x,y) = the intensity of the incident illumination, and
f(x,y) = the product of reflectivity and illumination intensity.

Reflectance is bounded by 0 (total absorption) and 1 (total reflectance):
black = 0; gray = 0.5; white = 1.
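A minimal sketch of this product model (the illumination and reflectance patterns below are made-up values, chosen only to illustrate the bounds):

```python
import numpy as np

# Illumination i(x,y): bright on the left, dimmer on the right (arbitrary values).
i = np.linspace(200.0, 50.0, num=8).reshape(1, 8).repeat(4, axis=0)

# Reflectance r(x,y) in [0, 1]: 0 = total absorption (black), 1 = total reflectance (white).
r = np.full((4, 8), 0.5)    # a uniform mid-gray surface
r[:, :2] = 0.0              # a black stripe (absorbs everything)
r[:, -2:] = 1.0             # a white stripe (reflects everything)

f = r * i                   # f(x,y) = r(x,y) * i(x,y)
print(f.round(1))
```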
Coordinate Systems:
You can specify locations in images using various coordinate systems

Types:
1. Pixel coordinate system
2. Spatial coordinate systems

Coordinates in pixel and spatial coordinate systems relate to locations in an image.
(1). Pixel coordinate system: the row index r increases downward, while the column index c increases to the right. Pixel coordinates are integer values and range from 1 to the number of rows or columns.

(2). Spatial coordinates: spatial coordinates enable you to specify a location in an image with greater granularity than pixel coordinates.

➢ For example, in the pixel coordinate system a pixel is treated as a discrete unit, uniquely identified by an integer row and column pair, such as (3,4).

➢ In the spatial coordinate system, locations in an image are represented in terms of partial pixels, such as (3.3, 4.7).

Figures: (1) pixel coordinate system; (2) spatial coordinate system.
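A small sketch of the difference between the two systems (the helper function name is our own, and it assumes the 1-based convention described above, with each pixel treated as a unit square centred on its integer coordinates):

```python
def spatial_to_pixel(a, b):
    """Map a fractional spatial location to the integer pixel that contains it."""
    return round(a), round(b)

print(spatial_to_pixel(3.3, 4.7))   # -> (3, 5): this spatial point falls inside pixel (3, 5)
```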
Types:
1. Analog Image
2. Digital Image

Analog Image:

(a). A two-dimensional function f(x,y), where (x,y) are the spatial coordinates and the value of f at any particular coordinates (x,y) is called the intensity or gray level of the image at that point.

(b). When the above mathematical representation has a continuous range of values for position and intensity, the image is an analog image.

Ex: the image produced on the screen of a CRT monitor.

A cathode-ray tube (CRT) is a vacuum tube containing one or more electron guns, which emit electron beams that are manipulated to display images on a phosphorescent screen. The images may represent electrical waveforms on an oscilloscope, a frame of video on an analog television set (TV), digital raster graphics on a computer monitor, or other phenomena such as radar targets.
Digital Image:

(a). A two-dimensional function f(x,y), where (x,y) are the spatial coordinates and the value of f at any particular coordinates (x,y) is called the intensity or gray level of the image at that point.

(b). When x, y, and f are all finite and discrete quantities, the image is a digital image.

Analog image -> Sampling -> Quantization -> Digital image

➢ A digital image contains a finite number of elements, each of which has a particular location and value.

➢ These elements are called picture elements, image elements, pels, or pixels.
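A minimal sketch of the two steps (the "continuous" scene is simulated by a fine grid, and the sampling step and number of gray levels are arbitrary choices for illustration):

```python
import numpy as np

# Simulate a continuous intensity surface on a fine grid, with values in [0, 1].
u = np.linspace(0.0, 1.0, 512)
analog = np.outer(np.sin(np.pi * u), np.cos(np.pi * u)) * 0.5 + 0.5

# Sampling: keep every 8th point in each direction -> a 64 x 64 grid of samples.
sampled = analog[::8, ::8]

# Quantization: map each sample to one of 256 discrete gray levels (8 bits).
digital = np.round(sampled * 255).astype(np.uint8)

print(digital.shape, digital.dtype)   # (64, 64) uint8: a digital image
```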
Advantages:
➢ Fast processing, cost effective, effective storage, efficient transmission, scope
for versatile image manipulations.

➢ Humans are limited to the visual band of the electromagnetic (EM) spectrum. However, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can thus operate on images generated by sources that humans are not capable of sensing, including ultrasound, electron microscopy, and computer-generated images.

Disadvantages:
➢ Good-quality images require a large amount of memory and hence a fast processor.
Electromagnetic (EM) spectrum
➢ Gamma rays: nuclear medicine, astronomical observations, bone scans
➢ X-rays: medical diagnostics and astronomy
And so on.
➢ Other sources include ultrasound, electron microscopy, and computer-generated images. Thus digital image processing encompasses a wide and varied field of applications.
➢ The principal energy source for images in use today is the electromagnetic energy spectrum.
FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING
It is helpful to divide the material of DIP into two broad categories:

➢ methods whose input and output are images, and

➢ methods whose inputs may be images, but whose outputs are attributes extracted from those images.
Image acquisition is the first process: Acquisition could be as simple as being
given an image that is already in digital form. Generally, the image acquisition
stage involves preprocessing, such as scaling.

Image enhancement is the process of manipulating an image so that the result is more suitable than the original for a specific application (problem oriented).

Image restoration is an area that also deals with improving the appearance of an
image. However, unlike enhancement, which is subjective, image restoration is
objective, in the sense that restoration techniques tend to be based on
mathematical or probabilistic models of image degradation.
Color image processing: color is also used as the basis for extracting features of interest in an image.

Wavelets are the foundation for representing images in various degrees of resolution (multiresolution processing); transform methods in image processing are based mostly on the Fourier transform.

Compression, as the name implies, deals with techniques for reducing the storage
required to save an image, or the bandwidth required to transmit it.

Morphological processing deals with tools for extracting image components that are useful in the representation and description of shape. This material begins a transition from processes that output images to processes that output image attributes.
Segmentation partitions an image into its constituent parts or objects. In general,
the more accurate the segmentation, the more likely automated object classification
is to succeed.
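One of the simplest illustrations of segmentation is global thresholding (a sketch only; it is not the only, nor usually the best, approach): every pixel brighter than a threshold is marked as object, the rest as background.

```python
import numpy as np

def threshold_segment(image, threshold):
    """Return a binary mask that is True where a pixel is brighter than the threshold."""
    return image > threshold

# A tiny made-up image with two bright "objects" on a dark background.
image = np.array([[ 10,  12, 200, 210],
                  [ 11, 198, 205,  13],
                  [  9,  14,  12,  10]], dtype=np.uint8)
print(threshold_segment(image, 128).astype(int))
```

Real systems often pick the threshold automatically, for example from the image histogram.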

Feature extraction consists of feature detection and feature description.

Feature extraction almost always follows the output of a segmentation stage, which
usually is raw pixel data, constituting either the boundary of a region (i.e., the set of
pixels separating one image region from another) or all the points in the region
itself.

Image pattern classification is the process that assigns a label (e.g., “vehicle”) to
an object based on its feature descriptors.

Approaches range from "classical" ones such as minimum-distance, correlation, and Bayes classifiers to more modern approaches implemented using deep convolutional neural networks.
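A minimal sketch of the minimum-distance classifier mentioned above (the feature vectors, class means, and labels are made up for illustration): each class is represented by the mean of its training feature vectors, and a new object receives the label of the nearest mean.

```python
import numpy as np

def minimum_distance_classify(x, class_means):
    """Assign x the label of the closest class mean (Euclidean distance)."""
    labels = list(class_means)
    distances = [np.linalg.norm(x - class_means[label]) for label in labels]
    return labels[int(np.argmin(distances))]

# Hypothetical 2-D feature vectors (e.g. size, mean intensity) for two classes.
class_means = {
    "vehicle":    np.array([120.0, 80.0]),
    "pedestrian": np.array([ 30.0, 60.0]),
}
print(minimum_distance_classify(np.array([110.0, 75.0]), class_means))   # -> "vehicle"
```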
Knowledge about a problem domain is coded into an image processing system in
the form of a knowledge database.

This knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information.

In addition to guiding the operation of each processing module, the knowledge base also controls the interaction between modules.
COMPONENTS OF AN IMAGE PROCESSING SYSTEM (a typical general-purpose system used for DIP):
Image Sensor: to acquire digital images, it has two subsystems:

(1). A physical sensor that responds to the energy radiated by the object we wish to image.

(2). A digitizer, a device for converting the output of the physical sensing device into digital form.

Specialized image processing hardware usually consists of the digitizer just mentioned, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU) that performs arithmetic and logical operations in parallel on entire images.

This type of hardware is sometimes called a front-end subsystem, and its most distinguishing characteristic is speed.

This unit performs functions that require fast data throughputs that the typical main computer cannot handle.
The computer in an image processing system is a general-purpose computer and
can range from a PC to a supercomputer.

Image processing software consists of specialized modules that perform specific tasks (for example, MATLAB).

Mass storage is a must in image processing applications.

An image of size 1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed.
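The one-megabyte figure follows directly (a quick check of the arithmetic):

```python
pixels = 1024 * 1024               # number of pixels
bits_per_pixel = 8                 # 8-bit intensity per pixel
bytes_needed = pixels * bits_per_pixel // 8
print(bytes_needed)                # 1,048,576 bytes = 1 MByte, uncompressed
```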

Three categories:
1. Short-term storage for use during processing
2. On-line storage for relatively fast recall
3. Archival storage characterized by infrequent access.
Image displays in use today are mainly color, flat screen monitors.

Hardcopy devices for recording images include laser printers, film cameras, heat-sensitive devices, ink-jet units, and digital units such as optical and CD-ROM disks.

Networking and cloud communication are almost default functions in any computer system in use today.

The key consideration is bandwidth.
