CG UNIT4
Computer Graphics important questions and notes
Uploaded by mayanknatholia

COMPUTER GRAPHICS

UNIT-4
Digital image:
A digital image is a representation of a real image as a set of numbers that can be stored and
handled by a digital computer. In order to translate the image into numbers, it is divided into
small areas called pixels (picture elements).

Types of digital image:


1. Raster image
Raster images have a finite set of digital values, called picture elements or pixels. The digital
image contains a fixed number of rows and columns of pixels. A pixel is the smallest individual
element of an image, holding a quantized value that represents the brightness of a given color at
a specific point.
Typically, the pixels are stored in computer memory as a raster image or raster map, a two-
dimensional array of small integers. These values are often transmitted or stored in a
compressed form.
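As a minimal sketch of this idea (the 3×3 values below are made up for illustration), a grayscale raster image can be modeled as a two-dimensional array of small integers:

```python
# A tiny 3x3 grayscale raster image: each entry is a pixel
# intensity in the range 0 (black) to 255 (white).
image = [
    [  0, 128, 255],
    [ 64, 192,  32],
    [255,   0, 128],
]

rows = len(image)     # number of pixel rows
cols = len(image[0])  # number of pixel columns
print(rows, cols)     # 3 3
```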

2. Vector image:
Vector images are generated from mathematical geometry (vectors). In mathematical terms, a vector
consists of both a magnitude, or length, and a direction.
Often, both raster and vector elements are combined in one image; for example, a billboard
with text (vector) and photographs (raster).
Examples of vector file types are EPS, PDF, and AI.

How is a digital image captured:


A digital camera uses an array of millions of tiny light cavities, or "photosites", to record an
image. When you press the camera's shutter button and the exposure begins, each photosite is
uncovered to collect photons and store them as an electrical signal.
Image capture (image acquisition) is the process of obtaining a digital image from a vision sensor,
such as a camera. Usually this entails a hardware interface known as a frame grabber, which
captures single frames of video, converts the analogue values to digital, and feeds the result
into computer memory.
When a user presses the capture button, CameraX invokes takePicture(), and the ring buffer
retrieves the captured frame whose timestamp is closest to that of the button press.
CameraX then reprocesses the capture session to generate an image from that frame, which is
saved to disk in JPEG format.

Store a digital image:


All digital imaging systems have one or more components (media) in or on which the digital
images are stored. Here we are representing these collectively just as the "Storage Media".
Later we will explore the various technologies that make up the storage media.
Writing and reading refer to the process of transferring image data to and from the storage
media.
Characteristics of any storage media that must be considered include:
 Capacity (Number of images that can be stored)
 Speed (Time required to write/record and read/retrieve images)
 Reliability and Security (To prevent loss of images)

Digital Image Processing:


 Digital Image Processing is the use of software to process images. It draws on
computer graphics, signals, photography, camera mechanisms, pixels, etc.
 Digital Image Processing provides a platform to perform operations such as image
enhancement and the processing of analog and digital signals, image signals, voice signals, etc.
 It can produce images in different formats.

Characteristics of Digital Image Processing:


 It uses software, some of which is free of cost.
 It produces clearer images.
 Digital Image Processing applies image enhancement to recover data from images.
 It is used widely in many fields.
 It reduces the complexity of working with digital images.
 It supports a better quality of everyday life, for example in medicine and photography.

Advantages of Digital Image Processing:


 Image reconstruction (CT, MRI, SPECT, PET)
 Image reformatting (multi-plane, multi-view reconstructions)
 Fast image storage and retrieval
 Fast and high-quality image distribution.
 Controlled viewing (windowing, zooming)

Disadvantages of Digital Image Processing:


 It is very time-consuming.
 It can be very costly, depending on the particular system.
 It requires qualified personnel.

Application areas of DIP:


1. Image sharpening and restoration
It refers to the process by which we can modify the look and feel of an image. It
manipulates an image to achieve the desired output. It includes conversion, sharpening,
blurring, edge detection, retrieval, and recognition of images.

2. Medical Field
There are several applications in the medical field which depend on the functioning of digital
image processing.
 Gamma-ray imaging
 PET scan
 X-Ray Imaging
 Medical CT scan
 UV imaging
3. Robot vision
Several robotic machines work on digital image processing. Through image
processing techniques a robot finds its way; for example, hurdle-detection robots and line-
follower robots.

4. Pattern recognition
It involves the study of image processing combined with artificial intelligence, so that
computer-aided diagnosis, handwriting recognition and image recognition can be easily
implemented. Nowadays, image processing is widely used for pattern recognition.

5. Video processing
It is also one of the applications of digital image processing. A collection of frames or pictures
is arranged in such a way that it creates the appearance of fast-moving pictures. It involves frame-rate
conversion, motion detection, noise reduction, color space conversion, etc.
File formats:
1. JPEG (Joint Photographic Experts Group)
The full form of JPEG is the Joint Photographic Experts Group. The JPEG format is used to store
image information in a small file. Digital cameras produce digital images in
JPEG format because the file size is smaller. JPEG is not used when the image
is a logo or drawing, because the image is in compressed form and when it is zoomed the
quality of the image decreases.

2. TIFF (Tagged Image File Format)


The full form of TIFF is the Tagged Image File Format. The file size of the TIFF digital image format is
comparatively larger than the JPEG format. The file size is larger because the images are stored
in uncompressed form. Photo-editing software such as Photoshop typically produces TIFF images.

3. PNG (Portable Network Graphics)


The full form of PNG is Portable Network Graphics. Many web images on the internet are
in PNG format. PNG is sometimes avoided because its file size is larger than that of
JPEG images, but an image of text stays small with great image
resolution. When a user takes a screenshot of a computer screen, the image is
stored in PNG format; because a screenshot is a mixed image that contains both text and pictures,
it is best stored in PNG format.

4. GIF (Graphics Interchange Format)


GIF, in full Graphics Interchange Format, is a digital file format devised in 1987 by the Internet
service provider CompuServe as a means of reducing the size of images and short animations.
Because GIF uses lossless data compression, meaning that no information is lost in the
compression, it quickly became a popular format for transmitting and storing graphic files.

Basic digital Image processing techniques:


A. Anti-aliasing:
Antialiasing is a computer graphics method that removes the aliasing effect. The aliasing
effect occurs when rasterized images (images rendered using pixels) have jagged edges,
sometimes called "jaggies". Technically, jagged edges arise when scan
conversion is done with low-frequency sampling, also known as under-sampling; this under-
sampling distorts the image. Aliasing also occurs when real-world objects made of
continuous, smooth curves are rasterized using pixels.
Under-sampling is an important factor in anti-aliasing. The information in the image is lost when
the sample size is too small. When sampling is done at a frequency lower than the Nyquist
sampling frequency, under-sampling takes place. We must have a sampling frequency that is at
least two times higher than the highest frequency appearing in the image in order to prevent
this loss.
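As a small numeric illustration of under-sampling (the 7 Hz and 3 Hz frequencies and the 10 Hz sampling rate are chosen here purely for the example), a sinusoid above the Nyquist limit produces exactly the same samples as a lower-frequency alias, so the high-frequency information is lost:

```python
import math

fs = 10  # sampling frequency (Hz): the Nyquist limit is fs/2 = 5 Hz

def sample(freq, n_samples, fs):
    """Sample sin(2*pi*freq*t) at sampling rate fs."""
    return [math.sin(2 * math.pi * freq * n / fs) for n in range(n_samples)]

high = sample(7, 20, fs)                 # 7 Hz: above the Nyquist limit
alias = [-s for s in sample(3, 20, fs)]  # 7 Hz aliases to a negated 3 Hz sinusoid

# The two sample sequences are indistinguishable: the 7 Hz content
# cannot be recovered, which is exactly the aliasing distortion.
print(all(abs(a - b) < 1e-9 for a, b in zip(high, alias)))  # True
```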

Anti-Aliasing Methods:
1. Using High-Resolution Display:
Displaying objects at a greater resolution is one technique to decrease aliasing impact and
boost the sampling rate. When using high resolution, the jaggies are reduced to a size that
renders them invisible to the human eye. As a result, sharp edges get blurred and appear
smooth.

2. Post-Filtering or Super-Sampling:
With this technique, we reduce the effective pixel size and improve the sampling
resolution by treating the screen as though it were formed of a much finer grid. The physical screen
resolution, however, does not change. After each subpixel's intensity has been calculated, the
displayed pixel intensity is taken as the average of the intensities of its subpixels.
Because we sample at a higher resolution in order to display the image at the lower screen
resolution, the process is known as super-sampling. And because this process is
carried out after creating the rasterized image, the technique is also known as post-filtering.
Super-sampling is computationally expensive, and graphics-card manufacturers have developed
optimized variants of it, such as CSAA by NVIDIA and CFAA by AMD.
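A minimal sketch of 2×2 super-sampling (the subpixel intensities below are made up for illustration): each displayed pixel is the average of a 2×2 block of subpixels sampled at twice the screen resolution:

```python
# A 4x4 "high-resolution" grid of subpixel intensities (0-255).
subpixels = [
    [255, 255,   0,   0],
    [255, 255,   0,   0],
    [  0,   0, 255, 255],
    [  0,   0, 255, 255],
]

def supersample_2x2(sub):
    """Average each 2x2 block of subpixels into one displayed pixel."""
    out = []
    for r in range(0, len(sub), 2):
        row = []
        for c in range(0, len(sub[0]), 2):
            block = [sub[r][c], sub[r][c+1], sub[r+1][c], sub[r+1][c+1]]
            row.append(sum(block) // 4)
        out.append(row)
    return out

print(supersample_2x2(subpixels))  # [[255, 0], [0, 255]]
```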

3. Pre-Filtering or Area-Sampling:
The areas of each pixel's overlap with the objects displayed are taken into account while
calculating pixel intensities in area sampling. In this case, the computation of pixel color is
centered on the overlap of scene objects with a pixel region.

4. Pixel Phasing:

It is a method to eliminate aliasing. In this case, pixel positions are shifted to locations
closer to the object geometry. To help with distributing intensities for pixel phasing,
some systems also allow the size of individual pixels to be adjusted.

Application of Anti-Aliasing:
1. Compensating for Line Intensity Differences -
When a horizontal line and a diagonal line are plotted on a raster display, the number of
pixels needed to depict both lines is the same, despite the diagonal line being 1.414 (√2)
times longer than the horizontal line. As a result, the longer line appears less intense.
Anti-aliasing techniques compensate for this loss of intensity by allocating pixel intensity
in proportion to the length of the line.
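A small numeric sketch of this compensation (the 100-pixel line and unit pixel spacing are assumptions for the example): both lines light the same number of pixels, so the diagonal line's intensity per unit length drops by a factor of √2 unless it is scaled back up:

```python
import math

n_pixels = 100                             # both lines light 100 pixels
horizontal_length = n_pixels * 1.0         # unit spacing along the axis
diagonal_length = n_pixels * math.sqrt(2)  # one diagonal step per pixel

# Intensity per unit length, assuming equal total intensity per pixel:
horiz_density = n_pixels / horizontal_length  # 1.0
diag_density = n_pixels / diagonal_length     # ~0.707

# Compensation: scale each diagonal pixel's intensity by sqrt(2)
# so both lines have equal intensity per unit length.
compensated = diag_density * math.sqrt(2)
print(round(diag_density, 3), round(compensated, 3))  # 0.707 1.0
```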

2. Anti-Aliasing Area Boundaries -


Jaggies along area boundaries can be eliminated using anti-aliasing principles. These techniques
can be used to smooth out area borders in scanline algorithms. If moving pixels is an option,
they are moved to positions nearer the edges of the area. Other techniques modify the amount
of pixel area inside the boundary by adjusting the pixel intensity at the boundary position. Area
borders are effectively rounded off using these techniques.

B. Concept of Convolution
Convolution is used for many things, like calculating derivatives, detecting edges, applying blurs, etc.,
and all of this is done using a "convolution kernel". A convolution kernel is a very small matrix in which
each cell has a number, and the matrix has an anchor point.
The anchor point is used to know the position of the kernel with respect to the image. It starts
at the top-left corner of the image and moves over each pixel sequentially. The kernel overlaps a few
pixels at each position on the image. Each overlapped pixel is multiplied by the corresponding kernel
value and the products are added; the sum is set as the value of the current position.

Convolution is the process in which each element of the image is added to its local neighbors,
weighted by the kernel. It is related to a form of mathematical convolution.
Convolution is not traditional matrix multiplication; it is denoted
by *.

Pseudo code to describe the convolution process:


For each image row in input image:
    For each pixel in image row:
        Set accumulator to zero
        For each kernel row in kernel:
            For each element in kernel row:
                If element position corresponds to pixel position then
                    Multiply element value by corresponding pixel value
                    Add result to accumulator
                End if
        Set output image pixel to accumulator
 Convolution can be computed using nested for loops, but this involves a lot of
repeated calculation and becomes slow as the sizes of the image and kernel increase. Using the
Discrete Fourier Transform, convolution can be computed rapidly: the
entire convolution operation is converted into a simple multiplication in the frequency domain.
 In convolution, a problem occurs when the kernel is near the edges or corners, because the
kernel is two-dimensional and partly falls outside the image.

To overcome these problems following things can be done:


1. Edge pixels can be ignored.
2. Extra pixels can be created near the edges.

Extra pixels can be created in the following ways:


1. Duplicate edge pixels.
2. Reflect edge pixels.
3. Pixels can be copied from the other end (wrap around).
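The pseudocode above, using the duplicate-edge strategy for border pixels, can be sketched in plain Python (the 3×3 box-blur kernel is an assumption for the example; like the pseudocode, this omits the kernel flip of strict mathematical convolution, which makes no difference for symmetric kernels):

```python
def convolve(image, kernel):
    """2D convolution with duplicate-edge padding.

    image: 2D list of numbers; kernel: 2D list with odd dimensions,
    anchor point at the kernel centre.
    """
    rows, cols = len(image), len(image[0])
    krows, kcols = len(kernel), len(kernel[0])
    ar, ac = krows // 2, kcols // 2  # anchor point (centre)

    def pixel(r, c):
        # Duplicate-edge padding: clamp out-of-range coordinates.
        r = min(max(r, 0), rows - 1)
        c = min(max(c, 0), cols - 1)
        return image[r][c]

    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            acc = 0  # accumulator, as in the pseudocode
            for kr in range(krows):
                for kc in range(kcols):
                    acc += kernel[kr][kc] * pixel(r + kr - ar, c + kc - ac)
            row.append(acc)
        out.append(row)
    return out

# 3x3 box blur (assumed kernel): each output pixel is the mean of
# its 3x3 neighbourhood, so a constant image stays constant.
box = [[1 / 9] * 3 for _ in range(3)]
flat = [[90] * 4 for _ in range(4)]
print(round(convolve(flat, box)[0][0]))  # 90
```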
C. Thresholding:
Thresholding achieves image segmentation in the simplest way. Image segmentation
means dividing the complete image into sets of pixels in such a way that the pixels in each set
share some common characteristic. Image segmentation is highly useful in defining objects and
their boundaries.
In this section we perform some basic thresholding operations on images.
We use the OpenCV function threshold. It can be found in the Imgproc package.
Syntax:
Imgproc.threshold(source, destination, thresh, maxval, type);

The parameters are described below −

Sr.No. Parameter & Description


1 source
It is the source image.

2 destination
It is the destination image.

3 thresh
It is the threshold value.

4 maxval
It is the maximum value to be used with the THRESH_BINARY and
THRESH_BINARY_INV threshold types.

5 type
The possible types are THRESH_BINARY, THRESH_BINARY_INV,
THRESH_TRUNC, and THRESH_TOZERO.
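The THRESH_BINARY rule (output maxval where the pixel exceeds thresh, 0 otherwise) can be sketched in plain Python; the 127 and 255 values below mirror common OpenCV usage but are assumptions for this example:

```python
def threshold_binary(image, thresh, maxval):
    """THRESH_BINARY: pixel > thresh -> maxval, else 0."""
    return [[maxval if p > thresh else 0 for p in row] for row in image]

gray = [[10, 200], [127, 128]]
# 127 is NOT above the threshold, so it maps to 0.
print(threshold_binary(gray, 127, 255))  # [[0, 255], [0, 255]]
```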
Apart from these thresholding methods, there are other methods provided by the
Imgproc class. They are described briefly −

Sr.No. Method & Description

1 cvtColor(Mat src, Mat dst, int code, int dstCn)


It converts an image from one color space to another.

2 dilate(Mat src, Mat dst, Mat kernel)


It dilates an image by using a specific structuring element.

3 equalizeHist(Mat src, Mat dst)


It equalizes the histogram of a grayscale image.

4 filter2D(Mat src, Mat dst, int ddepth, Mat kernel, Point anchor,
double delta)
It convolves an image with the kernel.

5 GaussianBlur(Mat src, Mat dst, Size ksize, double sigmaX)


It blurs an image using a Gaussian filter.

6 integral(Mat src, Mat sum)


It calculates the integral of an image.
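As a plain-Python sketch of what integral(Mat src, Mat sum) computes (the 2×2 input is made up; note that OpenCV's actual output has an extra leading row and column of zeros, a detail this sketch omits): each entry of the integral image is the sum of all pixels above and to the left, which lets any rectangular sum be read off in constant time:

```python
def integral_image(image):
    """s[r][c] = sum of image[0..r][0..c], inclusive."""
    rows, cols = len(image), len(image[0])
    s = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Inclusion-exclusion over the already-computed partial sums.
            s[r][c] = (image[r][c]
                       + (s[r - 1][c] if r > 0 else 0)
                       + (s[r][c - 1] if c > 0 else 0)
                       - (s[r - 1][c - 1] if r > 0 and c > 0 else 0))
    return s

print(integral_image([[1, 2], [3, 4]]))  # [[1, 3], [4, 10]]
```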

Image Enhancement:
Image enhancement refers to the process of highlighting certain information of an image, as
well as weakening or removing any unnecessary information according to specific needs. For
example, eliminating noise, revealing blurred details, and adjusting levels to highlight features
of an image.
Image enhancement techniques can be divided into two broad categories:
1. Spatial domain — enhancement of the image space that divides an image into uniform
pixels according to the spatial coordinates with a particular resolution. The spatial domain
methods perform operations on pixels directly.

Types of spatial domain operator:


 Point operation (intensity transformation) - Point operations refer to running the same
conversion operation for each pixel in a grayscale image. The transformation is based on the
original pixel and is independent of its location or neighboring pixels.
 Spatial filter (or mask, kernel) - The output value depends on the values of f(x,y) and its
neighborhood.
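A minimal sketch of a point operation (the image negative, s = 255 − r, is used as the example transform): the same mapping is applied to every pixel, independent of its location or its neighbors:

```python
def negative(image):
    """Point operation: s = 255 - r, applied to each pixel independently."""
    return [[255 - p for p in row] for row in image]

print(negative([[0, 100], [255, 30]]))  # [[255, 155], [0, 225]]
```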

2. Frequency domain — enhancement obtained by applying the Fourier Transform to the
spatial domain. In the frequency domain, pixels are operated on in groups as well as indirectly.
Here are some examples of image enhancement:
 Smooth and sharpen
 Noise removal
 Deblur images
 Contrast adjustment
 Brighten an image
 Grayscale image histogram equalization
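The last item, grayscale histogram equalization, can be sketched in plain Python (the low-contrast 2×2 image below is made up for illustration): the normalized cumulative histogram of intensities is used as the mapping that spreads them over the full range:

```python
def equalize_hist(image, levels=256):
    """Map each intensity through the normalized cumulative histogram."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    # Histogram of intensities.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    # Standard mapping: scale the CDF to the full output range.
    cdf_min = next(c for c in cdf if c > 0)
    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
    return [[remap(p) for p in row] for row in image]

# A low-contrast image whose intensities cluster in 100..103
# gets stretched across the whole 0..255 range.
img = [[100, 100], [101, 103]]
print(equalize_hist(img))  # [[0, 0], [128, 255]]
```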
