Chapter 3 Images and Graphics

Chapter 3 covers digital image representation, formats, and processing techniques, detailing how images are stored, analyzed, and transmitted. It discusses various color models, image resolutions, and the differences between raster and vector graphics. Additionally, it highlights the dynamics in graphics and the framework for interactive graphics systems, emphasizing user interaction and control.


Chapter 3: Images and

Graphics
• Digital Image Representation
• Image and graphics Format
• Image Synthesis, Analysis and Transmission
Nature of Digital Images
• An image is a spatial representation of an object, a 2D or 3D scene, or
another image.
• Images may be real or virtual. An image can be thought of abstractly as a
continuous function defined over a (usually rectangular) region of a plane.
• Examples:
▪ Recorded image – photographic or digital format
▪ Computer vision – video image, digital image or picture
▪ Computer graphics – digital image
▪ Multimedia – deals with all of the above formats
Digital Image Representation
• A digital image is represented by a matrix of numeric values each representing a
quantized intensity value.
• When I is a two-dimensional matrix, then I(r, c) is the intensity value at the position
corresponding to row r and column c of the matrix.
• The points at which an image is sampled are known as picture elements, commonly
abbreviated as pixels. The pixel values of intensity images are called gray scale levels.
• Image resolution: the number of pixels in a digital image (higher
resolution → better quality).
• The intensity at each pixel is represented by an integer and is determined from the
continuous image by averaging over a small neighborhood around the pixel location.
• If there are just two intensity values, for example, black, and white, they are represented
by the numbers 0 and 1; such images are called binary-valued images.
• If 8-bit integers are used to store each pixel value, the gray levels range from 0 (black) to
255 (white).
Digital Image Representation
• Intensity value can be represented by:
▪ 1-bit: black & white images
▪ 8-bits: grayscale images .
▪ 8-bit color images
▪ 24-bits: color images (RGB)
1 Bit Images
• Monochrome Image:
• Each pixel is stored as a single bit (0 or 1),
• A 640 x 480 monochrome image requires 37.5 KB of storage:
• Image dimensions: 640 x 480 pixels
• Total number of pixels (= bits at 1 bit/pixel): 640 * 480 = 307,200
• Total number of bytes: 307,200 bits / 8 = 38,400 bytes
• Total number of kilobytes: 38,400 bytes / 1024 = 37.5 KB
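The storage figures in these slides all follow one formula (width × height × bits per pixel). A small Python sketch (illustrative only; Python is not part of the slides) that reproduces them:

```python
def image_storage_kb(width, height, bits_per_pixel):
    """Uncompressed image size in kilobytes (1 KB = 1024 bytes)."""
    total_bits = width * height * bits_per_pixel
    return total_bits / 8 / 1024

# 640 x 480 monochrome, 1 bit per pixel -> 37.5 KB, matching the slide
print(image_storage_kb(640, 480, 1))
```

The same function gives 300 KB at 8 bits/pixel and 900 KB at 24 bits/pixel, matching the grayscale and 24-bit color slides that follow.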
8-BIT GRAY-LEVEL IMAGES

• Each pixel has a gray value between 0 and 255.


• Each pixel is represented by a single byte: e.g., a dark pixel might have a
value of 10, and a bright one 230.
• An 8-bit value per pixel (ranging from 0 to 255) is the most common
representation for grayscale images.
• A 640 x 480 grayscale image requires 300 KB of storage:
• Total number of bits needed: 307,200 pixels * 8 bits/pixel = 2,457,600 bits
• Total number of bytes: 2,457,600 bits / 8 bits/byte = 307,200 bytes
• Total number of kilobytes: 307,200 bytes / 1024 bytes/KB ≈ 300 KB
• Suitable for black and white images, medical scans
8-BIT COLOR IMAGES

• Many systems can make use of 8 bits of color information (the so-called
“256 colors”) in producing a screen image.
• With 8 bits per pixel and a color lookup table, we can display at most
256 distinct colors at a time.
• The image itself stores not colors but a set of bytes, each of which is an
index into a table of 3-byte values that specify the color for a pixel with
that lookup table index.
• Suitable for colorful photos, graphics, digital art etc.
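The lookup-table idea can be sketched in a few lines of Python (the 4-entry palette below is a made-up example; a real 8-bit image indexes into a table of up to 256 entries):

```python
# Hypothetical palette: each entry is a 3-byte (R, G, B) value.
palette = [
    (0, 0, 0),        # index 0: black
    (255, 0, 0),      # index 1: red
    (0, 255, 0),      # index 2: green
    (255, 255, 255),  # index 3: white
]

# The image stores only one index byte per pixel, not the colors themselves.
indexed_image = [
    [3, 3, 1],
    [0, 2, 1],
]

# On display, each index is resolved through the lookup table to get RGB.
rgb_image = [[palette[i] for i in row] for row in indexed_image]
```

This is why an 8-bit image is so compact: the per-pixel cost is one byte, and the full 3-byte colors live once in the table.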
24 Bit Color Images

• In a 24-bit color image, each pixel is represented by three bytes, usually
representing R, G, and B.
• This format supports 256 x 256 x 256 possible combined colors, or a total
of 16,777,216 possible colors.
• Such flexibility does result in a storage penalty: a 640 x 480 24-bit color
image would require 900 KB of storage without any compression.
An image in 8-bit and 24-bit color
Resolution
• Resolution measures how much detail an image can have.
• It determines how clear and sharp the image appears.
• Higher resolution images have more pixels, which results in finer
details and smoother edges.
• Image resolution is the number of pixels per image.
• 320 x 240 = 76,800 pixels; 700 x 400 = 280,000 pixels
Resolution
300 PPI (pixels per inch) means that there are 300 pixels per inch
horizontally and 300 pixels per inch vertically on the display or printed
medium.

• Example: Let's consider two images of a cat, one with a low resolution and the
other with a high resolution:
• Low Resolution:
▪ Image Size: 800x600 pixels
▪ This image appears blurry and pixelated when enlarged.
▪ Details like fur texture and facial features might not be well-defined.
• High Resolution:
▪ Image Size: 4000x3000 pixels
▪ This image looks sharp and clear even when enlarged.
▪ You can see fine details like individual strands of fur and intricate facial expressions.
• In this example, the high-resolution image has a greater pixel density, which
results in a better-defined and more detailed representation of the cat compared
to the low-resolution image.
Vector Graphics
• Vector graphics are a type of computer graphics that use
mathematical objects such as points, lines, curves, and shapes to
represent images.
• They are resolution-independent, which means that they can be
scaled to any size without losing quality.
• They are smaller in file size than raster graphics, making them ideal
for web use.
• This scalability makes vector graphics ideal for various applications,
including logos, illustrations, typography, and animations.
Vector VS Bitmap
• Class Assignment
Image and Graphics file format
• A digital image is stored in a file conforming to a certain format. In
addition to the pixel data, the file contains information needed to identify
and decode the data:
▪ Format
▪ Image size
▪ Depth
▪ Color and palette
▪ Compression
• Some formats work only on certain platforms, while others can be used on
all platforms. Some formats are specific to an application. Some formats are
for images, others for vector graphics. Some formats allow compression,
others allow only raw data.
• Formats that use compression make the file size smaller, though some
compression algorithms lose some image information.
Image file format
Format Extension
JPEG (Joint Photographic Expert Group) .jpg, .jpeg
PNG (Portable Network Graphics) .png
GIF (Graphics Interchange Format) .gif
BMP (Bitmap) .bmp
TIFF (Tagged Image File Format) .tiff, .tif
SVG (Scalable Vector Graphics) .svg
EPS (Encapsulated PostScript) .eps
AI (Adobe Illustrator) .ai
PSD (Adobe Photoshop) .psd
WebP .webp
HEIF (High Efficiency Image Format) .heif, .heic
Color Systems
• Color is what we see when light interacts with things around us.
• It is the way we perceive the different wavelengths of light.
• The human eye can see about 1 million different colors, which are
created by mixing different wavelengths of light.
• For example, when white light hits an object, some of the
wavelengths are absorbed by the object and some are reflected. The
reflected wavelengths are what we see as the object's color. So, a red
apple reflects red light and absorbs all other wavelengths.
Colour System
• Colour is a vital component of multimedia. Colour management is both a
subjective and a technical exercise, because:
▪ Colour is a physical property of light but
▪ Colour perception is a human physiological activity
▪ Choosing a right colour or colour combination involves many trials and
aesthetic judgement
▪ Colour is a physical property of light, and it is determined by the wavelength
of the light wave. The human eye can see light waves in the range of 380 to
760 nanometers, which is why we can see the colors that we do.
RGB Colour Model
• RGB stands for Red, Green, Blue.
• It is probably the most popular colour model used in computer
graphics.
• It is an additive system in which varying amounts of the three colours red,
green and blue are added to produce new colours.
• All other colors can be created by mixing these three primary colors in
different ways.
• The amount of each primary color is represented by a value between
0 and 255, where 0 is the absence of the color and 255 is the full
intensity of the color.
RGB Colour Model
• For example, the color red is represented by the value (255, 0, 0),
which means that the red component is at full intensity and the green
and blue components are at zero intensity.
• The RGB color model is used in many digital devices, such as
computer monitors, televisions, and cameras.
RGB Colour Model
• RGB color model can be used to create different colors:
▪ Mixture of red and green light creates yellow light.
▪ Mixture of red and blue light creates magenta light.
▪ Mixture of green and blue light creates cyan light.
▪ Mixture of all three primary colors (red, green, and blue) creates white light.
▪ No light at all creates black.
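The additive mixtures listed above can be checked with a small sketch: mixing light is a channel-wise sum, clipped at full intensity (Python used here for illustration only):

```python
def add_light(*colors):
    """Additive RGB mixing: sum each channel, clipping at 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_light(RED, GREEN))        # yellow
print(add_light(RED, GREEN, BLUE))  # white
```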
CMY Color Model
• The three primary colors in the CMY model are cyan, magenta, and yellow.
• It is a subtractive color model: colors are created by inks that absorb
(subtract) certain wavelengths of light and reflect the rest.
• Cyan, magenta and yellow combined at full strength absorb all the light,
resulting in black.
• The amount of each primary color is represented by a value between 0 and
1, where 0 is the absence of the color and 1 is the full intensity of the color.
• For example, the color red is represented by the value (0, 1, 1), which
means that the cyan component is at zero intensity and the magenta and
yellow components are at full intensity.
• It is used in printing, where it is used to create inks that can be used to
reproduce colors on paper.
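With both models normalized to the 0–1 range, CMY is simply the complement of RGB, which is why red comes out as (0, 1, 1). A minimal sketch (illustrative Python):

```python
def rgb_to_cmy(r, g, b):
    """Each ink value is the complement of the corresponding light value."""
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(c, m, y):
    """The conversion is its own inverse."""
    return (1 - c, 1 - m, 1 - y)
```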
CMY Color Model

• How the CMYK color model can be used to create different colors:
▪ Mixture of cyan and yellow inks creates green.
▪ Mixture of magenta and yellow inks creates red.
▪ Mixture of cyan, magenta, and yellow inks creates black.
▪ Adding black (K) to any of the colors makes the color darker.
• The CMYK color model is not used in digital display devices, which emit
light and therefore use the additive RGB model; CMYK is the standard for
print.
HSB Colour Model

• The HSB color model is a way of representing colors by their hue,
saturation, and brightness.
• It's a way to represent colors in a more intuitive and human-friendly
manner.
• Hue is identified by the name of the colour. It is measured as a location on
the standard colour wheel as a degree between 0 degree to 360 degree.
• Saturation is the strength or purity of the color. It represents the amount
of gray proportion to the hue and is measured as a percentage from 0%
(gray) to 100% (fully saturated).
• Brightness is the relative lightness or brightness of colour. It is measured as
a percentage from 0% (black) to 100% (white).
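Python's standard `colorsys` module converts RGB to this representation (it calls the model HSV; H, S and V come back in the 0–1 range and are rescaled below to the degree/percentage units used on the slide):

```python
import colorsys

def rgb_to_hsb(r, g, b):
    """RGB bytes (0-255) -> (hue in degrees, saturation %, brightness %)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return (h * 360, s * 100, v * 100)

# Pure red sits at 0 degrees on the color wheel, fully saturated and bright.
print(rgb_to_hsb(255, 0, 0))
```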
YUV Colour Model
• YUV color model is a color space that represents colors using three
components: luma (Y), blue difference (U), and red difference (V).
• Luma (Y) is the brightness or lightness of the color. Higher Y values
indicate brighter areas, and lower values indicate darker areas.
• Blue difference (U) is the difference between the blue and luma
components. Positive U values indicate more blue, negative values
indicate less blue.
• Red difference (V) is the difference between the red and luma
components. Positive V values indicate more red, negative values
indicate less red.
YUV Model
• In 8-bit YUV, the chroma components U and V are stored with an offset of
128, so neutral colors have U = V = 128:
▪ Black = 0, 128, 128
▪ Mid-gray = 128, 128, 128
▪ White = 255, 128, 128
• A Y value of 255 indicates maximum brightness. A U value of 255 means that
there is a maximum amount of blue color information relative to the
brightness. A V value of 255 indicates a maximum amount of red color
information relative to the brightness.
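As a sketch of the conversion (assuming the full-range BT.601 coefficients, with the common +128 offset on U and V so that neutral colors sit at U = V = 128):

```python
def rgb_to_yuv(r, g, b):
    """Full-range BT.601 RGB -> YUV; U and V carry a +128 storage offset."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 128 + 0.492 * (b - y)  # scaled blue difference
    v = 128 + 0.877 * (r - y)  # scaled red difference
    return (round(y), round(u), round(v))

print(rgb_to_yuv(0, 0, 0))        # black: no luma, neutral chroma
print(rgb_to_yuv(255, 255, 255))  # white: full luma, neutral chroma
```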
Computer Image Processing
• Image processing is a method to perform some operations on an
image, in order to get an enhanced image or to extract some useful
information from it.
• Computer image processing comprises image synthesis (generation) and image
analysis (recognition).
• It is a type of signal processing in which input is an image and output
may be image or characteristics/features associated with that image.
Computer Image Processing
• Processing basically includes the following three steps:
▪ Importing the image via image acquisition tools;
▪ Analyzing and manipulating the image;
▪ Output in which result can be altered image or report that is based on image
analysis.
• There are two types of methods used for image processing namely,
analog and digital image processing.
Computer Image Processing
• Analog image processing is the manipulation of continuous-valued images.
It is performed by electronic devices such as cameras, scanners, and
televisions. Analog image processing techniques are typically used for real-
time applications, such as video surveillance and medical imaging.
• Digital image processing is the manipulation of discrete-valued images. It is
performed by computers. Digital image processing techniques are typically
used for offline applications, such as image editing and computer vision.
• The three general phases that all types of data undergo with digital
techniques are:
▪ Pre-processing: tasks such as noise removal, image resizing, and image
segmentation
▪ Enhancement: tasks such as sharpening, contrast enhancement, and
brightness adjustment
▪ Display and information extraction: tasks such as object detection, image
classification, and face recognition
Dynamics in Graphics
• Dynamic graphics involve making things move on a computer screen
using data i.e., simulating motion or movement using the computer.
• Examples: animations, and tours.
• An animation is like showing a series of pictures in the order they
happened, like steps in a story. We can show how things improve or
change step by step, like showing the growth of plants.
• Tours help you explore different views of something by moving
around it.
Dynamics in Graphics
• Motion Dynamic:
▪ It describes changes over time in an image
▪ With motion dynamics, objects can be moved and tumbled with respect to a
stationary observer.
▪ Tracking a moving car in a surveillance video
• Update Dynamic:
▪ It describes changes in an image's content over time due to updates or
modifications
▪ Update dynamic is the actual change of the shape, color, or other properties
of the objects being viewed.
▪ Watching the colors of a traffic light change from green to yellow to red.
▪ Displaying real-time stock market data with price changes
Framework of Interactive Graphics System
• In interactive computer graphics the user has some control over the
picture, i.e., the user can make changes to the produced image.
• Interactive computer graphics require two-way communication between the
computer and the user.
• The user can see the image and make changes by sending commands with an
input device.
• The framework of an interactive graphics system has the following three
components:
i. Application Model
ii. Application Program
iii. Graphics System
Framework of Interactive Graphics System
Framework of Interactive Graphics System
Application model:
• The application model represents the data or objects to be pictured
on the screen; it is stored in an application database.
• It holds basic shapes and details that make up the objects including
facts about how the objects look and how they fit together.
• The model is application-specific and is created independently of any
particular display system.
Framework of Interactive Graphics System
Application program:
• It maps application objects to views (images) of those objects by
calling on graphics library.
• Application model may contain lots of non-graphical data (e.g., details
about objects).
Framework of Interactive Graphics System
Graphics system:
• Graphics library/package is intermediary between application and
display hardware (Graphics System)
Graphics input/ output hardware
Graphics Hardware – Input:
• The current way we interact with technology includes common tools like the
mouse, the data tablet, and the touch-sensitive screen.
• Other input devices can sense 3D and higher dimensional input values, such as
track-balls, space balls, and data gloves.
• Track-balls can sense rotation about the vertical axis, but there is no direct
relationship between hand movements and the corresponding movement in 3D
space. It can be rotated with the fingers to control the movement of a cursor on
a screen.
• Space balls are rigid spheres that can be pushed or pulled in any direction to
provide 3D translation and orientation.
• Data gloves record hand position, orientation, and finger movements. They can
be used to grasp, move, and rotate objects. It is covered with sensors that track
the position and movement of the hand and fingers.
Graphics Hardware –Output
• Current output technology uses raster displays, which store display
primitives in a refresh buffer in terms of their component pixels.
• In some raster displays, there is a hardware display controller that
receives and interprets sequences of output commands.
• In simpler systems, like personal computers, the display controller is
just a part of the graphics software. The refresh buffer is a part of the
computer's memory. The image display part takes this information to
create the final picture on the screen.
Architecture of Raster Display
Dithering
• Dithering is the process by which we create the illusion of colors that are
not actually present.
• It is done by the random arrangement of pixels.
• Consider the given image:

• This is an image with only black and white pixels in it. Its pixels are
arranged in an order to form another image, shown below. Note that the
arrangement of the pixels has been changed, but not the number of pixels.

• Dithering is a versatile technique that can be used to improve the quality
of images and audio files in a variety of multimedia applications.
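A common concrete form of this is ordered (Bayer) dithering, which compares each grayscale pixel against a small repeating threshold matrix to decide black or white. A sketch (illustrative Python; the technique is not named on the slides):

```python
# 2x2 Bayer threshold matrix, scaled to the 0-255 intensity range.
BAYER_2X2 = [[0, 128],
             [192, 64]]

def ordered_dither(gray):
    """Map an 8-bit grayscale raster to 0/1 pixels using tiled thresholds."""
    return [
        [1 if pixel > BAYER_2X2[r % 2][c % 2] else 0
         for c, pixel in enumerate(row)]
        for r, row in enumerate(gray)
    ]

# A flat mid-gray block becomes a checkerboard: half the pixels are white,
# which the eye averages back into the illusion of gray.
print(ordered_dither([[128, 128], [128, 128]]))
```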
Image Analysis
• Image analysis is the process of extracting information from images. This
information can be used for a variety of purposes, such as object
recognition, shape description, and measurement.
• Just knowing where a single dot (pixel) is and what color it has doesn't tell
us much about what's in the picture. We need more details to understand
things like what an object is, how it looks, where it's placed, if it's broken,
and how far parts are from each other.
• Imagine a picture of a house. Knowing just the spot and color of a dot
(pixel) in the picture doesn't let us know if it's a door, a window, or the roof.
Even if we know its location, we can't tell how tall the house is or whether
the door is open.
Image Analysis
• Image analysis techniques include the following:
• Computation of perceived brightness and color: This is the process of
converting the raw pixel values into a representation of the image
that is more meaningful to humans.
• Partial or complete recovery of three-dimensional data in the
scene: This is the process of extracting the three-dimensional
structure of an object from a two-dimensional image. This can be
done using techniques such as stereo vision or structured light.
Image Analysis
• Location of discontinuities corresponding to objects in the
scene: This is the process of finding the edges and boundaries of
objects in an image. This can be done using techniques such as edge
detection or region growing.
• Characterization of the properties of uniform regions in the
image: This is the process of identifying and describing regions of an
image that have similar properties, such as brightness, color, or
texture. This can be done using techniques such as clustering or
segmentation.
Image Analysis
• Image analysis is important in many areas:
▪ aerial surveillance photographs,
▪ Scan television images of the moon or of planets gathered from space probes,
▪ television images taken from an industrial robot's visual sensor,
▪ X-ray images
• Subareas of image processing include image enhancement, pattern
detection and recognition and scene analysis and computer vision
Image Analysis
• Image enhancement deals with improving image quality by
eliminating noise or by enhancing contrast.
• Pattern detection and recognition deal with detecting and classifying
standard patterns and finding distortions from these patterns.
• A particularly important example is Optical Character Recognition
(OCR) technology, which allows for the economical bulk input of
pages of typeset, typewritten or even hand-printed characters
Image Recognition
• Image recognition is the process of identifying and classifying objects
in images. It's like teaching a computer to "see" and understand
what's in a picture.
• It uses machine vision technologies with artificial intelligence and
trained algorithms to recognize images through a camera system.
• Automotive, e-commerce, retail, manufacturing industries, security,
surveillance, healthcare, farming etc., can have a wide application of
image recognition.
Image Recognition Examples

Object Detection
Face Detection

Image Recognition
Image Recognition
• A recognition methodology must pay substantial attention to each of the
following six steps: image formatting, conditioning, labeling, grouping,
extracting and matching.
Image Formatting
• Image formatting means capturing an image from a camera and
bringing it into a digital form.
• Digital representation of an image in the form of pixels.
Conditioning
• In an image, there are features which are uninteresting, either
because they were introduced into the image during the digitization
process as noise, or because they form part of a background.
• An observed image is composed of informative patterns modified by
uninteresting random variations.
• It suppresses the uninteresting variations in the image, effectively
highlighting the informative patterns.
• It can be applied uniformly to all images, regardless of their content.
Labeling
• Informative patterns in an image have structure. They are composed
of adjacent pixels that share some property, such as the same
intensity or color.
• Patterns can be identified by looking for runs of adjacent pixels that
differ greatly in intensity or color. These pixels are likely to mark
boundaries between objects, or between an object and the background.
• Edge detection techniques focus on identifying these continuous
adjacent pixels. There are many different edge detection techniques
(Canny, Laplacian), but they all work by finding the points in an image
where the intensity or color changes abruptly.
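A minimal illustration of the idea (much cruder than Canny or a Laplacian: just a forward difference against the right and lower neighbours, with an assumed threshold):

```python
def detect_edges(gray, threshold=50):
    """Mark pixels whose intensity jumps sharply to a neighbouring pixel."""
    h, w = len(gray), len(gray[0])
    edges = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            dx = abs(gray[r][c] - gray[r][c + 1]) if c + 1 < w else 0
            dy = abs(gray[r][c] - gray[r + 1][c]) if r + 1 < h else 0
            if max(dx, dy) > threshold:
                edges[r][c] = 1
    return edges

# A vertical boundary between a dark and a bright region is marked along
# the column where the intensity changes abruptly.
print(detect_edges([[0, 0, 255, 255],
                    [0, 0, 255, 255]]))
```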
Labeling

Edge Detected Image


Grouping
• Labeling finds primitive objects, such as edges.
• Grouping can turn edges into lines by determining that different
edges belong to the same spatial event.
• The first 3 operations represent the image as a digital image data
structure (pixel information), however, from the grouping operation
the data structure needs also to record the spatial events to which
each pixel belongs.
• This information is stored in a logical data structure.
• A grouping operation, where edges are grouped into lines, is called
line-fitting.
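Line-fitting is often done by least squares: given the edge pixels assigned to one spatial event, find the line y = m·x + b that minimizes the squared vertical error. A self-contained sketch (illustrative Python):

```python
def fit_line(points):
    """Least-squares fit of y = m*x + b to a list of (x, y) edge points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Edge points lying on y = 2x + 1 recover slope 2 and intercept 1.
print(fit_line([(0, 1), (1, 3), (2, 5)]))
```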
Grouping

Line fitting of image


Extraction
• Grouping only records the spatial event(s) to which pixels belong.
Feature extraction involves generating a list of properties for each set
of pixels in a spatial event.
• These may include a set's centroid, area, orientation, spatial moments
(describe the shape of a region), etc.
• Other properties might depend on whether the group is considered a
region or an arc.
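A sketch of extracting two of these properties, area and centroid, for one group of pixels (illustrative Python; `pixels` is a list of (row, col) coordinates belonging to one spatial event):

```python
def region_features(pixels):
    """Area (pixel count) and centroid of one grouped region."""
    area = len(pixels)
    centroid = (sum(r for r, _ in pixels) / area,
                sum(c for _, c in pixels) / area)
    return {"area": area, "centroid": centroid}

# A 2x2 block of pixels has area 4 and its centroid at (0.5, 0.5).
print(region_features([(0, 0), (0, 1), (1, 0), (1, 1)]))
```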
Matching
• Finally, once the pixels in the image have been grouped into objects
and the relationship between the different objects has been
determined, the final step is to recognize the objects in the image.
• Matching involves comparing each object in the image with previously
stored models and determining the best match (template matching).
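A toy version of template matching scores each candidate position by the sum of squared differences (SSD), where 0 is a perfect match (illustrative Python on small integer rasters):

```python
def ssd(patch, template):
    """Sum of squared differences; 0 means a perfect match."""
    return sum((patch[r][c] - template[r][c]) ** 2
               for r in range(len(template))
               for c in range(len(template[0])))

def best_match(image, template):
    """Slide the template over the image; return the (row, col) of best fit."""
    th, tw = len(template), len(template[0])
    scores = {
        (r, c): ssd([row[c:c + tw] for row in image[r:r + th]], template)
        for r in range(len(image) - th + 1)
        for c in range(len(image[0]) - tw + 1)
    }
    return min(scores, key=scores.get)

image = [[0, 0, 0, 0],
         [0, 9, 9, 0],
         [0, 9, 9, 0],
         [0, 0, 0, 0]]
# The bright 2x2 block is found at row 1, column 1.
print(best_match(image, [[9, 9], [9, 9]]))
```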
Image Recognition Visualization
Image Transmission
• Image transmission takes into account transmission of digital images
through computer networks.
• There are several requirements on the networks when images are
transmitted:
1. The network must be able to handle a sudden increase in data traffic, such as
when a large image is being transmitted.
2. Image transmission requires reliable transport;
3. Time-dependence is not a dominant characteristic of images, in contrast to
audio/video transmission. For example, if you are sending a photo of a
birthday party, it does not matter if the photo arrives a few seconds late.
Image Transmission
• It is the process of sending an image from one device to another over
a communication channel, such as a network or a wireless
connection.
• Image size depends on the image representation format used for
transmission.
• There are several possibilities:
i. Raw image data transmission
ii. Compressed image data transmission
iii. Symbolic image data transmission
Raw Image Data Transmission
• In this case, the image is generated through a video digitizer and
transmitted in its digital format.
• The size can be computed in the following manner:
Size = spatial resolution x pixel quantization
• Spatial resolution is the number of pixels in the image. 640 x 480 pixels
• Pixel quantization is the number of bits used to represent each pixel. An
image with a pixel quantization of 8 bits can represent 256 different colors
for each pixel.
• For example, the transmission of an image with a resolution of 640 x 480
pixels and pixel quantization of 8 bits per pixel requires transmission of
307,200 bytes through the network.
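The same formula also gives the transmission time once a link bandwidth is assumed (the 1 Mbit/s figure below is an illustrative assumption, not from the slides):

```python
def transmission_time_s(width, height, bits_per_pixel, bandwidth_bps):
    """Seconds needed to send an uncompressed image over the given link."""
    return width * height * bits_per_pixel / bandwidth_bps

# 640 x 480 at 8 bits/pixel (307,200 bytes) over an assumed 1 Mbit/s link.
print(transmission_time_s(640, 480, 8, 1_000_000))
```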
Compressed Image Data Transmission
• In this case, the image is generated through a video digitizer and
compressed before transmission.
• The reduction of image size depends on the compression method and
compression rate.
Symbolic Image Data Transmission
• In this case, the image is represented through symbolic data
representation as image primitives (e.g., 2D or 3D geometric
representation), attributes and other control information.
• This image representation method is used in computer graphics.
• Image size is equal to the structure size, which carries the transmitted
symbolic information of the image.
Assignment
Create a presentation / video / Report that uses images and graphics to
explain a topic of your choice.
