UNIT-4
Digital image:
A digital image is a representation of a real image as a set of numbers that can be stored and
handled by a digital computer. In order to translate the image into numbers, it is divided into
small areas called pixels (picture elements).
2. Vector image:
Vector images are generated from mathematical geometry (vectors). In mathematical terms, a
vector consists of both a magnitude, or length, and a direction.
Often, both raster and vector elements will be combined in one image; for example, in the case
of a billboard with text (vector) and photographs (raster).
Examples of vector file types are EPS, PDF, and AI.
2. Medical Field
There are several applications in the medical field that depend on digital image processing:
Gamma-ray imaging
PET scan
X-Ray Imaging
Medical CT scan
UV imaging
3. Robot vision
There are several robotic machines that work on digital image processing. Through image
processing techniques a robot finds its way, for example, a hurdle-detection robot or a line-
follower robot.
4. Pattern recognition
It involves the study of image processing combined with artificial intelligence, so that
computer-aided diagnosis, handwriting recognition and image recognition can be easily
implemented. Nowadays, image processing is widely used for pattern recognition.
5. Video processing
It is also one of the applications of digital image processing. A collection of frames or pictures
is displayed in rapid succession to create the appearance of motion. Video processing involves
frame rate conversion, motion detection, noise reduction, color space conversion, etc.
File formats:
1. JPEG (Joint Photographic Experts Group)
The full form of JPEG is Joint Photographic Experts Group. The JPEG format is used to store
image information in a small file size, which is why digital cameras typically produce images in
JPEG format. JPEG is not used for logos or drawings, because the image is stored in a lossy
compressed form and its quality decreases when it is zoomed.
Anti-Aliasing Methods:
1. Using High-Resolution Display:
Displaying objects at a higher resolution is one technique to decrease the aliasing effect by
boosting the sampling rate. At high resolution, the jaggies are reduced to a size that renders
them invisible to the human eye, so sharp edges appear smooth rather than stair-stepped.
2. Post-Filtering or Super-Sampling:
With this technique, we reduce the effective pixel size and improve the sampling resolution by
treating the screen as though it were formed of a much finer grid; the actual screen resolution,
however, does not change. The intensity of each subpixel is calculated first, and the displayed
pixel intensity is then taken as the average of the intensities of its subpixels.
Because we sample at a higher resolution in order to display the image at a lower (screen)
resolution, the process is known as super-sampling. And because this filtering is carried out
after the rasterized image has been created, the technique is also known as post-filtering. Its
computational cost is high, since several samples must be computed for every displayed pixel.
Graphics-card manufacturers continue to improve and advance super-sampling techniques, for
example CSAA from NVIDIA and CFAA from AMD.
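As a rough illustration of super-sampling (a minimal sketch, not tied to any graphics API; the 2x2 subpixel grid and the sampleScene coverage test are assumptions made for the example), each screen pixel is split into subpixels, the scene is sampled at every subpixel, and the samples are averaged:

// Minimal super-sampling sketch: sample each screen pixel on a 2x2 subgrid
// and average the subpixel intensities to get the displayed pixel value.
public class SuperSampling {

    // Hypothetical scene test: returns 1.0 if the point (x, y) is covered
    // by the object being rendered, otherwise 0.0.
    static double sampleScene(double x, double y) {
        return (y <= 0.5 * x) ? 1.0 : 0.0;   // example object: the region under the line y = 0.5x
    }

    public static void main(String[] args) {
        int width = 4, height = 4;
        int sub = 2;                          // 2x2 subpixels per displayed pixel
        double[][] image = new double[height][width];

        for (int py = 0; py < height; py++) {
            for (int px = 0; px < width; px++) {
                double sum = 0.0;
                // Sample the scene at the center of every subpixel.
                for (int sy = 0; sy < sub; sy++) {
                    for (int sx = 0; sx < sub; sx++) {
                        double x = px + (sx + 0.5) / sub;
                        double y = py + (sy + 0.5) / sub;
                        sum += sampleScene(x, y);
                    }
                }
                // The displayed intensity is the average of the subpixel samples.
                image[py][px] = sum / (sub * sub);
            }
        }

        for (double[] row : image) System.out.println(java.util.Arrays.toString(row));
    }
}

Pixels that the object only partially covers end up with intermediate intensities instead of being fully on or off, which is what softens the jagged edges.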
3. Pre-Filtering or Area-Sampling:
In area sampling, pixel intensities are calculated by taking into account the area of each pixel
that overlaps the objects being displayed. In this case, the pixel color is computed based on
how much of the pixel region is covered by scene objects.
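A minimal sketch of the idea, assuming the object being drawn is the region below an edge y = m·x + c and approximating the overlap by evaluating the edge at the pixel's center column (the class and method names are made up for the example):

// Area-sampling sketch: intensity of the unit pixel at (px, py) is the
// (approximate) fraction of its area covered by the region below y = m*x + c.
public class AreaSampling {
    static double coverage(int px, int py, double m, double c) {
        double yEdge = m * (px + 0.5) + c;            // edge height at the pixel's center column
        double covered = yEdge - py;                  // portion of the pixel lying below the edge
        return Math.max(0.0, Math.min(1.0, covered)); // clamp the covered fraction to [0, 1]
    }

    public static void main(String[] args) {
        // Pixels straddling the edge y = 0.5x + 1 get intermediate intensities.
        for (int py = 0; py < 4; py++) {
            System.out.printf("pixel (2, %d) coverage = %.2f%n", py, coverage(2, py, 0.5, 1.0));
        }
    }
}

A pixel fully below the edge receives intensity 1, a pixel fully above it receives 0, and pixels straddling the edge receive intermediate values, which is what removes the staircase appearance.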
4. Pixel Phasing:
It is a method to eliminate aliasing in which pixel positions are shifted to nearly exact positions
close to the object geometry. Some systems also allow the size of individual pixels to be
adjusted, which helps disperse intensities and aids pixel phasing.
Application of Anti-Aliasing:
1. Compensating for Line Intensity Differences -
When a horizontal line and a diagonal line are plotted on a raster display, the diagonal line is
about 1.414 times longer than the horizontal line, yet both lines are drawn with the same
number of pixels. The intensity of the longer line is therefore reduced. To make up for this loss
of intensity, anti-aliasing techniques allocate the intensity of pixels in accordance with the
length of the line.
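A short calculation makes this concrete: a horizontal line from (0, 0) to (100, 0) covers 101 pixels over a length of 100, while a diagonal line from (0, 0) to (100, 100) also covers 101 pixels but over a length of about 141.4, so its intensity per unit length drops by a factor of roughly 1.414. The sketch below (a plain illustration, assuming a DDA-style rasterizer that steps once per pixel) computes the compensation factor:

// Intensity-compensation sketch: scale a line's pixel intensity in proportion
// to the true line length covered by each rasterized pixel step.
public class LineIntensity {
    static double compensatedIntensity(double baseIntensity,
                                       double x0, double y0, double x1, double y1) {
        double dx = x1 - x0, dy = y1 - y0;
        double length = Math.sqrt(dx * dx + dy * dy);        // true geometric length of the line
        double steps = Math.max(Math.abs(dx), Math.abs(dy)); // pixel steps a DDA-style rasterizer takes
        return baseIntensity * length / steps;               // diagonal lines get a ~1.414x boost
    }

    public static void main(String[] args) {
        System.out.println(compensatedIntensity(1.0, 0, 0, 100, 0));   // horizontal line: 1.0
        System.out.println(compensatedIntensity(1.0, 0, 0, 100, 100)); // diagonal line: ~1.414
    }
}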
B. Concept of Convolution
Convolution is used for many operations, such as calculating derivatives, detecting edges, and
applying blurs, and all of this is done using a "convolution kernel". A convolution kernel is a very
small matrix in which each cell holds a number, and the kernel also has an anchor point.
The anchor point indicates the position of the kernel with respect to the image. The kernel
starts at the top-left corner of the image and moves over each pixel sequentially. At each
position the kernel overlaps a few pixels of the image; each overlapped pixel is multiplied by
the corresponding kernel value, the products are added, and the sum is set as the value of the
current (anchor) position.
Convolution is thus the process in which each element of the image is combined with its local
neighbors, weighted by the kernel. It is related to a form of mathematical convolution, which is
denoted by * and is not traditional matrix multiplication.
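A minimal sketch of this procedure in plain Java (the array-based image, the border handling that simply skips edge pixels, and the 3x3 blur kernel are assumptions made for the example):

// Convolution sketch: slide the kernel (anchor at its center) over the image,
// multiply each overlapped pixel by the matching kernel cell, and sum the products.
// (Strict mathematical convolution also flips the kernel; for symmetric kernels
// such as the blur below, the result is identical.)
public class ConvolutionDemo {
    static double[][] convolve(double[][] image, double[][] kernel) {
        int h = image.length, w = image[0].length;
        int kh = kernel.length, kw = kernel[0].length;
        int ay = kh / 2, ax = kw / 2;                 // anchor point at the kernel center
        double[][] out = new double[h][w];
        for (int y = ay; y < h - ay; y++) {           // border pixels are left at 0 in this sketch
            for (int x = ax; x < w - ax; x++) {
                double sum = 0.0;
                for (int ky = 0; ky < kh; ky++) {
                    for (int kx = 0; kx < kw; kx++) {
                        sum += image[y + ky - ay][x + kx - ax] * kernel[ky][kx];
                    }
                }
                out[y][x] = sum;                      // the sum becomes the value at the anchor position
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] image = {
            {0, 0, 0, 0, 0},
            {0, 9, 9, 9, 0},
            {0, 9, 9, 9, 0},
            {0, 9, 9, 9, 0},
            {0, 0, 0, 0, 0},
        };
        // 3x3 averaging (blur) kernel: every cell is 1/9.
        double[][] blur = new double[3][3];
        for (double[] row : blur) java.util.Arrays.fill(row, 1.0 / 9);

        for (double[] row : convolve(image, blur)) {
            System.out.println(java.util.Arrays.toString(row));
        }
    }
}

A kernel with every cell equal to 1/9 produces a simple blur, while kernels with positive and negative cells (for example the Sobel kernels) highlight edges.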
2. destination − It is the destination image.
3. thresh − It is the threshold value.
4. maxval − It is the maximum value to be used with the THRESH_BINARY and THRESH_BINARY_INV threshold types.
5. type − The possible types are THRESH_BINARY, THRESH_BINARY_INV, THRESH_TRUNC, and THRESH_TOZERO.
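These are the parameters of OpenCV's Imgproc.threshold(src, dst, thresh, maxval, type). A short usage sketch (the file names and the values 127 and 255 are placeholders chosen for the example):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class ThresholdDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Read the source image as grayscale (placeholder file name).
        Mat source = Imgcodecs.imread("input.jpg", Imgcodecs.IMREAD_GRAYSCALE);
        Mat destination = new Mat();

        // Pixels above the threshold (127) are set to maxval (255), the rest to 0.
        Imgproc.threshold(source, destination, 127, 255, Imgproc.THRESH_BINARY);

        Imgcodecs.imwrite("thresholded.jpg", destination);
    }
}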
Apart from these thresholding methods, there are other methods provided by the
Imgproc class. They are described briefly −
4. filter2D(Mat src, Mat dst, int ddepth, Mat kernel, Point anchor, double delta) − It convolves an image with the kernel.
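A brief usage sketch of filter2D with a 3x3 averaging (blur) kernel (the file names and the kernel itself are chosen only for illustration):

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Scalar;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class Filter2DDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat src = Imgcodecs.imread("input.jpg");
        Mat dst = new Mat();

        // 3x3 averaging kernel: every cell is 1/9, so each output pixel becomes the
        // mean of its 3x3 neighborhood (a simple blur).
        Mat kernel = Mat.ones(3, 3, CvType.CV_32F);
        Core.multiply(kernel, new Scalar(1.0 / 9), kernel);

        // anchor (-1, -1) places the anchor at the kernel center; delta 0 adds nothing.
        Imgproc.filter2D(src, dst, -1, kernel, new Point(-1, -1), 0);

        Imgcodecs.imwrite("blurred.jpg", dst);
    }
}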
Image Enhancement:
Image enhancement refers to the process of highlighting certain information of an image, as
well as weakening or removing any unnecessary information according to specific needs. For
example, eliminating noise, revealing blurred details, and adjusting levels to highlight features
of an image.
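As a simple illustration of such an adjustment (a sketch; the gain and bias values are arbitrary, and pixel values are assumed to lie in the 0-255 range), each pixel can be rescaled and clamped:

// Level-adjustment sketch: g(x, y) = gain * f(x, y) + bias, clamped to the 8-bit range.
public class LevelAdjust {
    static int[][] adjustLevels(int[][] pixels, double gain, double bias) {
        int h = pixels.length, w = pixels[0].length;
        int[][] out = new int[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int v = (int) Math.round(gain * pixels[y][x] + bias);
                out[y][x] = Math.max(0, Math.min(255, v)); // keep the result a valid 8-bit value
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] image = {{10, 60, 110}, {160, 210, 250}};
        // Stretch contrast (gain 1.5) and brighten slightly (bias +10).
        for (int[] row : adjustLevels(image, 1.5, 10)) {
            System.out.println(java.util.Arrays.toString(row));
        }
    }
}

Because it operates on pixel values directly, this kind of adjustment falls under the first category below.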
Image enhancement techniques can be divided into two broad categories:
1. Spatial domain - the image space is divided into uniform pixels indexed by spatial
coordinates at a particular resolution, and spatial domain methods perform operations on
these pixels directly.