
DIGITIZATION

By: VERA A. PANAGUITON


OUTLINE

 Image Acquisition
 Image Sensors
 Digitization
 Sampling
 Quantization
General Steps in Image Processing

 Importing the image via image acquisition tools;
 Analyzing and manipulating the image;
 Output, in which the result can be an altered image or a report based on image analysis.
Acquisition of Images
The images are generated by the combination of an illumination
source and the reflection or absorption of energy from that source by
the elements of the scene being imaged.
Imaging sensors are used to transform the illumination energy into
digital images.

© 2002 R. C. Gonzalez & R. E. Woods


Types of Image Sensors

© 2002 R. C. Gonzalez & R. E. Woods


Image Acquisition Using a Single Sensor
Image Acquisition Using Sensor Strips

Image acquisition using a linear sensor strip

Image acquisition using a circular sensor strip.


Image Acquisition Process
Ultrasound Imaging

[Figures: the ultrasonic spectrum; an ultrasound image acquisition device; an ultrasound image of a baby during pregnancy.]
Sensors Used in Image Capturing
Image sensors detect and convey the information used to make an image. An image sensor converts the variable attenuation of light waves (light passing through or reflecting off objects) or other electromagnetic radiation into signals: small bursts of current that convey information. These resultant electrical signals can be viewed, analyzed, or stored.
Image sensors are solid-state devices and serve as one of the most important components inside a machine vision camera. Image sensors can be classified according to several criteria, as follows:
 Structure type — CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor)
 Chroma type — Color or Monochromatic
 Shutter type — Global shutter or Rolling shutter
 Beyond these criteria, image sensors can also be classified according to resolution, frame rate, pixel size, and sensor format.
How a Typical Image Sensor Works Inside a Camera
 In a camera system, the image sensor receives incident light (photons) through a lens or other optics. A CCD (Charge-Coupled Device) sensor transfers this information as a voltage, while a CMOS (Complementary Metal Oxide Semiconductor) sensor transforms it into a digital signal. CMOS sensors convert photons into electrons, then to a voltage, and then into a digital value using an on-chip Analog-to-Digital Converter (ADC).
 Different camera manufacturers use different general layouts and components in the camera. The main purpose of this layout is to convert light into a digital signal which can then be analyzed to trigger some future action. Consumer-level cameras have additional components for image storage (a memory card), viewing (an embedded LCD), and control knobs and switches that machine vision cameras do not.
How a Digital Image Is Formed
Capturing an image with a camera is a physical process. Sunlight (or another illumination source) provides the energy, and a sensor array is used for the acquisition of the image. When the light falls upon the object, the amount of light reflected by that object is sensed by the sensors, and a continuous voltage signal is generated in proportion to the amount of sensed light.
The output of most image sensors is an analog signal; to create a digital image, we need to convert this continuous data into digital form.
Analog to Digital
Most image sensors output an analog signal, and digital image processing techniques cannot be applied to it in this form: storing a signal that can take on infinitely many values would require infinite memory. To create a digital image, the continuous data is converted into digital form. An image function f(x, y) must be digitized both spatially and in amplitude. The conversion from analog to digital involves two processes: sampling and quantization.
Sampling:
Digitizing the coordinate values (x-axis). The sampling rate determines the spatial resolution of the digitized image.
Quantization:
Digitizing the amplitude values (y-axis). The quantization level determines the number of grey levels in the digitized image.

[Figure: Analog to digital conversion.]
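As a minimal sketch of these two steps on a one-dimensional signal (assuming NumPy; the function names are illustrative, not from any standard library):

import numpy as np

def sample(signal, step):
    # Sampling: keep every `step`-th value of the densely sampled
    # "continuous" signal, which fixes the spatial resolution.
    return signal[::step]

def quantize(signal, levels):
    # Quantization: round each amplitude in [0, 1] to one of
    # `levels` equally spaced grey levels.
    return np.round(signal * (levels - 1)) / (levels - 1)

# A densely sampled sine wave stands in for one analog scan line.
x = np.linspace(0, 1, 1000)
analog = 0.5 + 0.5 * np.sin(2 * np.pi * 3 * x)

digital = quantize(sample(analog, step=10), levels=8)  # 100 samples, 3 bits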
Digitization

[Figures: a continuous image projected onto a sensor array; the result of image sampling and quantization; the values of the pixels.]
Sampling
In digital image processing, sampling is the reduction of a continuous-time signal to a discrete-
time signal. Sampling can be done for functions varying in space, time or any other dimension
and similar results are obtained in two or more dimensions.

Sampling takes two forms: Spatial and temporal.


Spatial sampling is essentially the choice of 2D resolution of an image
Temporal sampling is the adjustment of the exposure time of the CCD.

Oversampling is used for zooming. The difference between sampling and zooming is that sampling is done
on signals while zooming is done on the digital image.
Sampling
Reduction in Sampling Resolution
Two possibilities (sketched in code after this list):
 Downsampling
 Decimation
Effects of Reducing Spatial Resolution
 Jagged contours (staircase effect)
 Blur effect
 Details are less precise/detectable
 Loss of resolution
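A sketch of the two possibilities on a 2-D image (assuming NumPy; the helper names are illustrative):

import numpy as np

def decimate(img, k):
    # Decimation: keep every k-th pixel in each direction and discard
    # the rest; cheap, but prone to aliasing and jagged contours.
    return img[::k, ::k]

def block_average(img, k):
    # Downsampling with averaging: each output pixel is the mean of a
    # k x k block, which low-pass filters the image before reduction.
    h, w = img.shape[0] // k * k, img.shape[1] // k * k
    return img[:h, :w].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

img = np.random.rand(512, 512)
small_a = decimate(img, 4)       # 128 x 128, aliased
small_b = block_average(img, 4)  # 128 x 128, smoother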
Spatial Resolution Effects

[Figure: the same image at spatial resolutions of 1024, 512, 256, 128, 64, and 32 pixels.]
Increase in Sampling Resolution
Interpolation (sometimes called resampling)
— Process of using known data to estimate unknown values
— It refers to the "guess" of intensity values at missing locations, i.e., x and y can be arbitrary (e.g., zooming, shrinking, rotating, and geometric correction)
— An imaging method to increase (or decrease) the number of pixels in a digital image

Note: Some digital cameras use interpolation to produce a larger image than the sensor captured or to create digital zoom.
Image Interpolation
Image interpolation works in two directions, and tries to achieve the best approximation of a pixel's color and intensity based on the values at surrounding pixels. The following methods illustrate how resizing/enlargement works:
Common Interpolation Methods

 Nearest neighbor interpolation
 Bilinear interpolation
 Bicubic interpolation
Nearest Neighbor Interpolation
Nearest neighbor is the most basic and requires the least processing time
of all the interpolation algorithms because it only considers one pixel —
the closest one to the interpolated point. This has the effect of simply
making each pixel bigger.
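A minimal NumPy sketch of nearest-neighbor enlargement (the function name is illustrative):

import numpy as np

def nearest_neighbor_resize(img, new_h, new_w):
    h, w = img.shape[:2]
    # For each output pixel, pick the single closest source pixel;
    # on enlargement each source pixel is replicated into a block.
    rows = np.clip((np.arange(new_h) * h / new_h).astype(int), 0, h - 1)
    cols = np.clip((np.arange(new_w) * w / new_w).astype(int), 0, w - 1)
    return img[rows[:, None], cols[None, :]]

img = np.arange(16, dtype=float).reshape(4, 4)
big = nearest_neighbor_resize(img, 8, 8)  # each pixel becomes a 2x2 block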
Bilinear Interpolation
 Bilinear interpolation considers the closest 2x2 neighborhood of known pixel values surrounding the unknown pixel. It then takes a weighted average of these 4 pixels to arrive at its final interpolated value. This results in much smoother-looking images than nearest neighbor.
 In the special case where the unknown pixel is equidistant from all four known pixels, the interpolated value is simply their sum divided by four, as the sketch below illustrates.
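A sketch of bilinear interpolation at a single real-valued location (assuming NumPy; the helper is illustrative):

import numpy as np

def bilinear_at(img, y, x):
    # Weighted average of the 2x2 neighborhood of known pixels
    # surrounding the real-valued location (y, x).
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    top = img[y0, x0] * (1 - dx) + img[y0, x1] * dx
    bottom = img[y1, x0] * (1 - dx) + img[y1, x1] * dx
    return top * (1 - dy) + bottom * dy

img = np.array([[0.0, 1.0], [2.0, 3.0]])
print(bilinear_at(img, 0.5, 0.5))  # equidistant case: (0+1+2+3)/4 = 1.5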
Bicubic Interpolation
 Bicubic goes one step beyond bilinear by
considering the closest 4x4 neighborhood of
known pixels — for a total of 16 pixels. Since
these are at various distances from the
unknown pixel, closer pixels are given a
higher weighting in the calculation.
 Bicubic produces noticeably sharper images
than the previous two methods, and is
perhaps the ideal combination of processing
time and output quality. For this reason it is
a standard in many image editing programs
(including Adobe Photoshop), printer drivers
and in-camera interpolation.
Interpolation Example
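As a concrete example, a sketch comparing the three methods with the Pillow library (assuming Pillow 9.1 or newer; the input file name example.jpg is hypothetical):

from PIL import Image

img = Image.open("example.jpg")           # hypothetical input file
target = (img.width * 4, img.height * 4)  # 4x enlargement

# The three common methods from the previous slides, cheapest first.
nearest = img.resize(target, Image.Resampling.NEAREST)
bilinear = img.resize(target, Image.Resampling.BILINEAR)
bicubic = img.resize(target, Image.Resampling.BICUBIC)

bicubic.save("example_bicubic.jpg")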
Higher-Order Interpolation: Spline & Sinc

 There are many other interpolators which take more surrounding pixels into consideration, and are thus also much more computationally intensive.
 These algorithms include spline and sinc, and they retain the most image information after an interpolation. They are therefore extremely useful when the image requires multiple rotations/distortions in separate steps. However, for single-step enlargements or rotations, these higher-order algorithms provide diminishing visual improvement as processing time increases.
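A sketch of spline-based resampling (assuming SciPy is available):

import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)

# ndimage.zoom resamples with a spline of the given order; order=3
# (cubic) is the default, and order=5 costs more but preserves more
# image information across repeated warps.
enlarged = ndimage.zoom(img, zoom=4, order=5)  # 256 x 256 output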
Quantization
 Quantization is the counterpart of sampling: it is done on the "y-axis" (amplitude), while sampling is done on the "x-axis" (coordinates).
 It defines the number of possible intensity/color values that a pixel may have and relates to the quantization of the image information.
 Example: binary is 1 bit, grey-scale is 8 bits, and color (most commonly) is 24 bits.
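A sketch of re-quantizing an 8-bit image to fewer intensity levels (assuming NumPy; the function name is illustrative):

import numpy as np

def requantize(img, levels):
    # Map 8-bit intensities (0..255) onto `levels` equally spaced grey
    # levels; fewer levels means coarser amplitude steps.
    step = 256 // levels
    return (img // step) * step + step // 2

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
four_level = requantize(img, 4)  # 4 levels = 2 bits per pixel
binary = requantize(img, 2)      # 2 levels = 1 bit per pixel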
Effects of Reducing the Number of Intensity Levels
 False contours appear
 Quantization noise
 The effect is visible to the eye below about 6-7 bits
 Quantization for display: 8 bits

The number of intensity levels is typically an integer power of two (often 256: 1 byte = 8 bits per pixel), and the discrete levels are equally spaced.
Quantization Example
Decrease in Quantization Levels

[Figure: the same image quantized to 256, 128, 64, 32, 16, 8, 4, and 2 intensity levels.]
Resolution

 A digital image implies the discretization of both spatial and intensity values; the notion of resolution is valid in either domain.
 Most often it refers to the resolution in sampling.
— Extend the principles of multi-rate processing from standard digital signal processing.
 It can also refer to the number of quantization levels.
Spatial and Intensity Resolution
 Spatial resolution
— A measure of the smallest discernible detail in an image
— stated with line pairs per unit distance, dots (pixels) per unit
distance, dots per inch (dpi)

 Intensity resolution
— The smallest discernible change in intensity level
— stated with 8 bits, 12 bits, 16 bits, etc.

- END -
