03 Digitization
Image Acquisition
Image Sensors
Digitization
Sampling
Quantization
General Steps in Image Processing
[Figures: the ultrasonic spectrum; an ultrasound image acquisition device; an ultrasound image of a baby during pregnancy]
Sensors used in Image Capturing
Image sensors detect and convey the information used to make an image. When an image sensor works, it converts the variable attenuation of light waves (light passing through objects or reflecting off them) or other electromagnetic radiation into signals, small bursts of current that convey information. The resulting electrical signals can be viewed, analyzed, or stored.
An image sensor is a solid-state device and one of the most important components inside a machine vision camera. Image sensors can be classified according to several criteria as follows:
Structure type — CCD (Charge-Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor)
Chroma type — Color or Monochromatic
Shutter type — Global shutter or Rolling shutter
Other than these criteria, image sensors can also be classified according to resolution, frame rate, pixel size, and sensor format.
How a Typical Image Sensor Works Inside a Camera
In a camera system, the image sensor receives incident light (photons) through a lens or other optics. If the sensor is a CCD (Charge-Coupled Device), it transfers the collected charge and converts the information into a voltage; if the sensor is a CMOS (Complementary Metal Oxide Semiconductor) device, it transforms the information into a digital signal on-chip. CMOS sensors convert photons into electrons, then into a voltage, and then into a digital value using an on-chip Analog-to-Digital Converter (ADC).
Different camera manufacturers use different general layouts and components in the camera.
The main purpose of this layout is to convert light into a digital signal which can then be
analyzed to trigger some future action. Consumer-level cameras have additional components that machine vision cameras do not: image storage (a memory card), a viewing screen (an embedded LCD), and control knobs and switches.
[Figure: typical sensor functions]
How a Digital Image Is Formed
Capturing an image with a camera is a physical process. Sunlight is used as a source of energy. A sensor array is used for the acquisition of the image: when the sunlight falls upon the object, the amount of light reflected by that object is sensed by the sensors, and a continuous voltage signal is generated in proportion to the sensed data.
The output of most image sensors is an analog signal; to create a digital image, we need to convert this continuous data into digital form.
Analog to Digital
Most image sensors output an analog signal, and digital image processing techniques cannot be applied to the signal in this form: a signal that can take infinitely many values would require infinite memory to store.
To create a digital image, the continuous data is converted into digital form. An image function f(x, y) must be digitized both spatially and in amplitude. The conversion from analog to digital involves two processes, sampling and quantization, which are defined and then sketched in code below.
Sampling:
— Digitizing the coordinate values (the x-axis of the signal).
— The sampling rate determines the spatial resolution of the digitized image.
Quantization:
— Digitizing the amplitude values (the y-axis of the signal).
— The number of quantization levels determines the number of grey levels in the digitized image.
[Figure: analog-to-digital conversion]
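As a minimal sketch of both steps (assuming NumPy; the function name digitize_signal and its parameters are illustrative, not from the lecture):

import numpy as np

def digitize_signal(f, duration, sample_rate, levels):
    """Sample a continuous signal f(t) and quantize its amplitudes."""
    # Sampling: evaluate the continuous signal at discrete instants,
    # which fixes the resolution along the coordinate axis.
    t = np.arange(0.0, duration, 1.0 / sample_rate)
    samples = f(t)
    # Quantization: snap each amplitude to one of `levels` discrete
    # values, which fixes the number of grey levels.
    lo, hi = samples.min(), samples.max()
    step = (hi - lo) / (levels - 1)
    quantized = np.round((samples - lo) / step) * step + lo
    return t, quantized

# Example: a 2 Hz sine sampled at 50 Hz with 8 quantization levels.
t, q = digitize_signal(lambda t: np.sin(2 * np.pi * 2 * t),
                       duration=1.0, sample_rate=50, levels=8)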
Digitization
[Figure: the grid of pixel values in a digitized image]
Oversampling is used for zooming. The difference between sampling and zooming is that sampling is done
on signals while zooming is done on the digital image.
Sampling
Reduction in Sampling Resolution
Two possibilities (both sketched in code below):
Downsampling
Decimation
Effects of Reducing Spatial Resolution
Ugly contours (staircase steps)
Blur effect
Details are less precise/detectable
Loss of resolution
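As a minimal sketch of the two possibilities (assuming NumPy and a 2-D grayscale array; the helper names are illustrative):

import numpy as np

def decimate(img, factor):
    # Decimation: keep every factor-th pixel and discard the rest.
    # Fast, but prone to the ugly stepped contours noted above.
    return img[::factor, ::factor]

def downsample(img, factor):
    # Downsampling by block averaging: each output pixel is the mean
    # of a factor x factor block, trading stepped contours for blur.
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))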
Spatial Resolution Effects
[Figure: the same image at spatial resolutions of 128, 64, and 32]
Increase in Sampling Resolution
Interpolation (sometimes called resampling)
— The process of using known data to estimate unknown values.
— It refers to the “guess” of intensity values at missing locations, i.e., x and y can be arbitrary (e.g., zooming, shrinking, rotating, and geometric correction).
— An imaging method to increase (or decrease) the number of pixels in a digital image.
Note: Some digital cameras use interpolation to produce a larger image than the sensor captured or to create digital zoom.
Image Interpolation
Image interpolation works in two directions, and tries to achieve the best approximation of a pixel's color and intensity based on the values at surrounding pixels. The following examples illustrate how resizing/enlargement works:
Common Interpolation Methods
Nearest neighbor interpolation
Bilinear interpolation
Bicubic interpolation
Nearest Neighbor Interpolation
Nearest neighbor is the most basic and requires the least processing time
of all the interpolation algorithms because it only considers one pixel —
the closest one to the interpolated point. This has the effect of simply
making each pixel bigger.
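A minimal sketch of this (assuming NumPy and a 2-D grayscale array; the function name is illustrative):

import numpy as np

def nearest_neighbor_resize(img, new_h, new_w):
    h, w = img.shape
    # For each output pixel, pick the single closest input pixel;
    # enlarging this way simply makes each source pixel bigger.
    rows = (np.arange(new_h) * h / new_h).astype(int)
    cols = (np.arange(new_w) * w / new_w).astype(int)
    return img[rows[:, None], cols]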
Bilinear Interpolation
Bilinear interpolation considers the closest
2x2 neighborhood of known pixel values
surrounding the unknown pixel. It then
takes a weighted average of these 4 pixels
to arrive at its final interpolated value. This
results in much smoother looking images
than nearest neighbor.
In the special case where the unknown pixel is equally distant from all four known pixels, the interpolated value is simply their sum divided by four.
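A minimal sketch of sampling one point bilinearly (assuming NumPy; the function name is illustrative):

import numpy as np

def bilinear_sample(img, x, y):
    # Corners of the 2x2 neighborhood of known pixels around (x, y).
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    dx, dy = x - x0, y - y0
    # Weighted average of the four pixels: those closer to (x, y) get
    # larger weights. With dx = dy = 0.5 all four weights are 1/4,
    # i.e. the sum of the four pixels divided by four.
    return (img[y0, x0] * (1 - dx) * (1 - dy) +
            img[y0, x1] * dx * (1 - dy) +
            img[y1, x0] * (1 - dx) * dy +
            img[y1, x1] * dx * dy)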
Bicubic Interpolation
Bicubic goes one step beyond bilinear by
considering the closest 4x4 neighborhood of
known pixels — for a total of 16 pixels. Since
these are at various distances from the
unknown pixel, closer pixels are given a
higher weighting in the calculation.
Bicubic produces noticeably sharper images
than the previous two methods, and is
perhaps the ideal combination of processing
time and output quality. For this reason it is
a standard in many image editing programs
(including Adobe Photoshop), printer drivers
and in-camera interpolation.
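All three methods are exposed by Pillow's resize(); a minimal sketch (Pillow 9.1+ for the Resampling enum; the filename is illustrative):

from PIL import Image

img = Image.open("photo.jpg")
big = (img.width * 4, img.height * 4)  # 4x enlargement

nearest = img.resize(big, Image.Resampling.NEAREST)    # blocky
bilinear = img.resize(big, Image.Resampling.BILINEAR)  # smoother
bicubic = img.resize(big, Image.Resampling.BICUBIC)    # sharper still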
Interpolation Example
[Figure: an example image enlarged with different interpolation methods]
Higher Order Interpolation: Spline & Sinc
[Figure: the same image at resolutions of 256, 128, 64, 32, 16, 8, 4, and 2]
Resolution
Intensity resolution
— The smallest discernible change in intensity level
— Stated in bits: 8 bits, 12 bits, 16 bits, etc.; reducing the bit depth is sketched in code below.
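A minimal sketch of reducing intensity resolution (assuming NumPy and an 8-bit grayscale array; the function name is illustrative):

import numpy as np

def reduce_bit_depth(img, k):
    # Keep only the k most significant bits of each 8-bit pixel, so
    # the image contains at most 2**k distinct grey levels.
    shift = 8 - k
    return ((img >> shift) << shift).astype(np.uint8)

# Example: quantize an 8-bit image down to 16 grey levels (4 bits).
# img4 = reduce_bit_depth(img, 4)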
Spatial and Intensity Resolution
[Figure: combined effects of varying spatial and intensity resolution]
- END -