DIP - Lecture 1

Digital Image Processing

ITU 08207 & CSU 08202


Course Objectives
q At the end of the course, the student is expected to be able to:
○ Describe, understand, and apply the concepts of acquiring, storing, processing, and presenting digital images
○ Describe, understand, and apply digital image transformations and spatial filtering (manipulating the pixel values of an image based on their location)
○ Understand digital image restoration (repairing or enhancing an image) and reconstruction (recovering damaged information in an image) concepts
○ Use MATLAB to perform digital image transformations and arithmetic operations
○ Apply image restoration, morphological image processing, and segmentation of digital images
○ Apply image compression and wavelet transformations in multimedia

2
Department of Computing & Communication Technology
Assessment
q The assessment criteria of the course will be as follows
○ For Continuous Assessment (carrying 40 marks)
■ There will be two individual assignments, each carrying 5 marks (Total = 10 marks)
■ There will be one group assignment carrying 10 marks (Total = 10 marks)
■ Two tests, each carrying 10 marks (Total = 20 marks)
○ For the Semester Examination (carrying 60 marks)
■ The semester exam will consist of two sections (Section A = 40%, and Section B = 60%)
■ Section A will have TWO questions and Section B will have THREE questions

References
1. Milan Sonka, Vaclav Hlavac, and Roger Boyle, “Image Processing, Analysis, and Machine Vision”, Second Edition, Thomson Learning, 2001
2. Richard O. Duda, Peter E. Hart, and David G. Stork, “Pattern Classification”, Wiley Student Edition, 2006
3. Sanjit K. Mitra and Giovanni L. Sicuranza, “Nonlinear Image Processing”, Elsevier, 2007

RECOMMENDED:
1. Anil K. Jain, “Fundamentals of Digital Image Processing”, PHI, 2006
2. Rafael C. Gonzalez and Richard E. Woods, “Digital Image Processing”, Third Edition

Lecture Outline

q Introduction
q Application Areas of DIP
q Steps in Image Processing System
q Image Acquisition Processes
q Sampling and Quantization Processes

Introduction

q What is Digital Image Processing?

○ Digital Image Processing (DIP) refers to the processing of digital images by the
use of digital computers
○ OR – Is the process of using computer algorithms and mathematical models
to process and analyze digital images
○ OR - is the process of transforming an image into a digital form and performing
certain operations to get some useful information from it

q The goal of digital image processing is to enhance the quality of images, extract meaningful and useful information from them, and automate image-based tasks

Introduction
q What is an Image?

○ Image may be defined as a two-dimensional function, 𝑓(𝑥, 𝑦), where 𝑥 and 𝑦 are
spatial (plane) coordinates, and the amplitude of 𝑓 at any pair of coordinates (𝑥, 𝑦)
is called the intensity or gray level of the image at that point
○ When 𝑥, 𝑦, and the intensity values of 𝑓 are all finite and discrete quantities, we call
the image a digital image
○ "intensity" refers to the brightness of a pixel in an Image (Representing the amount
of light or color information contained in a specific pixel)
○ The grayscale value of each pixel typically represents the intensity of that pixel, ranging from 0 (black) to 255 (white)
○ RGB - A pixel is made up of 3 integers between 0 and 255 (the integers represent the intensity of red, green, and blue)
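The f(x, y) model above can be sketched in a few lines of plain Python. This is an illustrative toy representation (a nested list of intensities), not the storage format of any particular image library.

```python
# A minimal sketch of the f(x, y) image model: a digital image as a
# 2-D grid of intensity values (0 = black, 255 = white).
# All names here are illustrative, not from any particular library.

def make_image(rows, cols, value=0):
    """Create a rows x cols grayscale image filled with one intensity."""
    return [[value for _ in range(cols)] for _ in range(rows)]

def f(image, x, y):
    """Intensity (gray level) of the image at spatial coordinates (x, y)."""
    return image[x][y]

image = make_image(2, 3)          # 2 x 3 image, all black
image[0][1] = 255                 # set one pixel to white
image[1][2] = 128                 # set one pixel to mid-gray

print(f(image, 0, 1))             # 255

# An RGB pixel is three intensities in [0, 255]: (red, green, blue)
pure_red = (255, 0, 0)
```

Here x indexes the row and y the column, so f(image, 0, 1) reads the intensity at row 0, column 1.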

Introduction

q What is an Image?
○ The digital image comprises a finite number of elements, each with a
particular location and value
○ These elements are called picture elements, image elements, or pixels
○ Pixel is the term used most widely to denote the elements of a digital image
q An image is a two-dimensional function that represents a measure of
some characteristic such as the brightness or color of a viewed scene
q An image is a projection of a 3-D scene into a 2-D projection plane

Introduction

q An image is a projection of a 3-D scene into a 2-D projection plane

Origin of Digital Image Processing

Self Reading, Chapter 1, section 1.2, Digital Image Processing 3rd Edition by
Rafael C. Gonzalez and Richard E. Woods

Application Areas of Digital Image Processing

q Image processing finds extensive applications in various fields


q Three common areas where image processing is used significantly are:-
○ Medical Imaging: Image processing techniques are widely employed in
medical imaging for diagnostics, treatment planning, and research
■ Medical imaging modalities such as X-ray, Magnetic Resonance Imaging (MRI),
Computed Tomography (CT) scans, and ultrasound generate large volumes of
image data
■ Digital image processing algorithms are employed to reconstruct high-resolution
images from the acquired data, remove noise, allow for better visualization of
internal body structures and organs, extract features, and detect abnormalities
■ These techniques aid in the detection and diagnosis of diseases, surgical
planning, monitoring treatment progress, and medical research

Application Areas of Digital Image Processing

q Three common areas where image processing is used significantly are:-


○ Medical Imaging: Some important applications of DIP in Medical Imaging include
■ Image Enhancement: Digital image processing techniques are used to enhance the
quality of medical images acquired from various modalities such as X-ray, MRI, CT,
ultrasound, etc.
● Techniques like contrast enhancement, noise reduction, and edge sharpening can
improve the visual clarity of images, making it easier for healthcare professionals to
analyze and interpret images accurately
■ Image Reconstruction: In some medical imaging modalities like CT Scan and MRI,
data obtained from the scanning process needs to be processed and reconstructed
into meaningful images
● Digital image processing algorithms are used to reconstruct high-resolution images
from the acquired data, allowing for better visualization of internal body structures

Application Areas of Digital Image Processing
q Three common areas where image processing is used significantly are:-
○ Medical Imaging: Some important applications of DIP in Medical Imaging
include
■ Image Segmentation: Segmentation is the process of partitioning an image into
different regions based on their characteristics
● In medical imaging, image segmentation is used for identifying and portraying
specific body structures or regions of interest
● This information is crucial for diagnosis, treatment planning, and monitoring of
diseases
● Digital image processing techniques like thresholding, edge detection, region
growing, and clustering algorithms are commonly used for image segmentation

Application Areas of Digital Image Processing

q Three common areas where image processing is used significantly are:-


○ Medical Imaging: Some important applications of DIP in Medical Imaging include
■ Image Registration: Image registration involves aligning and overlaying multiple
medical images of the same patient or different patients taken at different times
or using different imaging modalities
● It helps in comparing images and tracking changes over time
● Digital image processing algorithms are used to extract features and perform
geometric transformations to achieve accurate image registration
■ Image Analysis and Quantification: Digital image processing techniques enable
quantitative analysis of medical images to extract measurements and features
● It aids in quantifying body structures, identifying tumors, measuring wound sizes,
tracking disease progression, and assessing treatment response
● This information assists healthcare professionals in making accurate diagnoses
and treatment decisions

Application Areas of Digital Image Processing

q Three common areas where image processing is used significantly are:-


○ Computer Vision: Computer vision is an area of study that focuses on enabling
computers to understand and interpret visual information from digital images
or videos
○ It involves developing algorithms and techniques that are used to extract
meaningful information from visual data, similar to how humans perceive and
interpret the visual world
■ Image processing plays a fundamental role in computer vision applications, such as
object recognition, image classification, object tracking, facial recognition,
autonomous vehicles, and robotics
■ Image processing algorithms are used to extract relevant features, recognize
patterns, and analyze visual data to enable machines to perceive and interpret their
surroundings

Application Areas of Digital Image Processing
q Three common areas where image processing is used significantly are:-
○ Computer Vision: Some important applications of DIP in Computer Vision are
■ Image Preprocessing: Digital image processing techniques are used to pre-process
and enhance images before applying computer vision algorithms
● This may involve tasks such as noise reduction, contrast enhancement, image
normalization, and image resizing
● Preprocessing helps improve the quality of images, making computer vision tasks more accurate and reliable

■ Feature Extraction: Digital image processing techniques are employed to extract relevant features and information from images
● These features may include edges, corners, textures, or other visual descriptors
● Feature extraction enables the representation of images in a form that is suitable
for further analysis and understanding by computer vision algorithms

Application Areas of Digital Image Processing
q Three common areas where image processing is used significantly are:-
○ Computer Vision: Some important applications of DIP in Computer Vision are
■ Image Segmentation: Image segmentation is the process of partitioning an
image into distinct regions or objects
● Digital image processing techniques, such as clustering (or image grouping – the process of grouping image regions based on their visual similarities) and thresholding (the process of segmenting an image into regions based on pixel intensity values, e.g., separating the foreground from the background), are used for image segmentation
● It helps in identifying and separating different objects or regions of interest within an image,
enabling more precise analysis and understanding
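The thresholding technique described above can be sketched in a few lines. The scan values and threshold below are illustrative assumptions, not data from the lecture.

```python
# A minimal thresholding sketch: pixels at or above the threshold become
# foreground (1), the rest background (0). Values are illustrative.

def threshold(image, t):
    """Binarize a 2-D intensity grid with a single global threshold t."""
    return [[1 if pixel >= t else 0 for pixel in row] for row in image]

scan = [[ 10,  20, 200],
        [ 15, 220, 210],
        [ 12,  18,  25]]

mask = threshold(scan, 128)
print(mask)   # [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
```

The bright region (values above 128) is separated from the darker background, which is exactly the foreground/background split the slide describes.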

■ Image Understanding and Analysis: Digital image processing techniques enable higher-level analysis and understanding of images in computer vision
● This includes tasks such as object tracking, scene understanding, and image
classification
● By leveraging digital image processing algorithms, computer vision systems can
interpret and comprehend visual information, leading to intelligent decision-making
and automation

Application Areas of Digital Image Processing
q Three common areas where image processing is used significantly are:-
○ Surveillance & Security: Image processing is extensively used in surveillance
and security systems
■ These systems employ video cameras to capture and process images or videos for
various purposes, including object detection, tracking, face recognition, and
behavior analysis
■ Image processing algorithms can identify and track objects of interest, detect
suspicious activities or anomalies, and provide automated surveillance solutions
■ These technologies are employed in areas like traffic monitoring, access control,
and video surveillance for crime prevention

Application Areas of Digital Image Processing
q Three common areas where image processing is used significantly are:-
○ Surveillance & Security: Some applications of Digital Image Processing in Surveillance
and Security Systems include:-
■ Object Detection and Tracking: Digital image processing techniques are used to detect and
track objects of interest in surveillance videos or images
● Object detection algorithms can identify and locate specific objects, such as persons of interest or vehicles
● Object tracking algorithms can follow the movement of these objects over time,
providing valuable information for security monitoring and analysis

■ Intrusion Detection: Digital image processing can be employed to detect unauthorized intrusions or abnormal activities in surveillance systems
● By analyzing video streams or images, algorithms can identify suspicious behaviors,
such as people climbing fences, or entering restricted areas
● This helps in real-time threat detection and alerts security personnel to potential
security breaches

Application Areas of Digital Image Processing
q Three common areas where image processing is used significantly are:-
○ Surveillance & Security: Some applications of Digital Image Processing in
Surveillance and Security Systems include:-
■ Facial Recognition: Facial recognition is a specific application of digital image
processing used in surveillance and security systems
● It involves identifying and verifying individuals based on their facial features
● Facial recognition algorithms can compare captured faces with a database of
known individuals or identify unknown individuals based on facial characteristics
● This technology is used for access control, identity verification, and forensic
investigations

■ License Plate Recognition: License plate recognition (LPR) systems utilize digital
image processing techniques to identify and extract license plate information from
surveillance images or video streams
● LPR algorithms can read and interpret license plate numbers, enabling automated
vehicle tracking, parking management, and law enforcement applications

Application Areas of Digital Image Processing
q Three common areas where image processing is used significantly are:-
○ Surveillance & Security: Some applications of Digital Image Processing in
Surveillance and Security Systems include:-
■ Image and Video Forensics: Digital image processing techniques are applied in the
forensic analysis of surveillance images or videos
● These techniques help in enhancing image quality, recovering details from low-
resolution or noisy footage, and extracting relevant information for investigations
● Digital forensics can assist in identifying suspects, analyzing crime scenes, and
providing evidence for legal proceedings

■ Perimeter Security: Digital image processing can be used to enhance perimeter security systems
● Algorithms can monitor and analyze video feeds from surveillance cameras placed
along the perimeter of a facility or property
● They can detect and alert security personnel about unauthorized intrusions,
perimeter breaches, or suspicious activities near the premises
Application Areas of Digital Image Processing
q Digital image processing has other applications beyond these three areas
q It is also used in
○ Remote sensing,
○ Satellite imagery analysis,
○ Industrial inspection,
○ Quality control,
○ Entertainment and gaming,
○ Image and video editing,
○ Biometrics, and more
q The usefulness of image processing techniques makes them valuable in a
wide range of domains where visual information needs to be analyzed,
interpreted, and utilized

Fundamental Steps in Digital Image Processing
q Digital Image processing has different phases/steps in processing a digital
image
○ Image Acquisition: This phase involves capturing or obtaining the digital image
using various devices such as cameras, scanners, or sensors
■ It involves converting the physical image into a digital representation; the acquisition stage also involves preprocessing, such as scaling
■ Preprocessing: Preprocessing is performed to enhance the quality of the acquired
image and remove any noise that may have been introduced during image
acquisition
■ Common preprocessing techniques include noise reduction, image resizing, and
color correction

Fundamental Steps in Digital Image Processing
q Digital Image processing has different phases/steps in processing a digital
image
○ Image Enhancement: This is the process of manipulating an image so that the
result is more suitable than the original for a specific application.
■ Image enhancement techniques are used to improve the visual quality or
interpretability of an image
■ This step aims to highlight important features, enhance contrast, and improve the
overall appearance of the image
○ Image restoration: is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques are based on mathematical or probabilistic models of image degradation
■ Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a “good” enhancement result
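The contrast enhancement mentioned above can be sketched as min-max contrast stretching, one common technique among many. The input values below are illustrative assumptions.

```python
# A hedged sketch of one enhancement technique: min-max contrast
# stretching, which rescales intensities to span the full [0, 255] range.

def stretch_contrast(image):
    """Linearly map the image's [min, max] intensity range onto [0, 255]."""
    flat = [p for row in image for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                      # flat image: nothing to stretch
        return [row[:] for row in image]
    scale = 255 / (hi - lo)
    return [[round((p - lo) * scale) for p in row] for row in image]

dull = [[100, 110],
        [120, 150]]                   # low-contrast image, range [100, 150]

print(stretch_contrast(dull))         # [[0, 51], [102, 255]]
```

The narrow intensity range [100, 150] is spread across the full gray scale, which is what makes the result visually higher in contrast.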

Fundamental Steps in Digital Image Processing
q Digital Image processing has different phases/steps in processing a digital
image
○ Image restoration:
■ Restoration techniques are used to recover an image from degradation caused by
factors such as noise, blurring, or compression
■ Restoration techniques reverse the degradation effects and restore the image to its
original state as much as possible
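Real restoration techniques model the degradation itself, as the slide notes. As a minimal noise-reduction sketch only, a 3×3 mean filter smooths isolated noisy pixels; the image values below are illustrative.

```python
# A simple noise-smoothing sketch: a 3x3 mean (averaging) filter.
# This illustrates noise reduction, not a full model-based restoration.

def mean_filter3(image):
    """3x3 mean filter; border pixels are left unchanged for simplicity."""
    rows, cols = len(image), len(image[0])
    out = [row[:] for row in image]
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            window = [image[i][j]
                      for i in range(x - 1, x + 2)
                      for j in range(y - 1, y + 2)]
            out[x][y] = round(sum(window) / 9)
    return out

noisy = [[10, 10, 10],
         [10, 190, 10],
         [10, 10, 10]]                # one impulse-noise pixel

print(mean_filter3(noisy))           # center 190 is smoothed to 30
```

The isolated spike of 190 is replaced by the neighborhood average (270 / 9 = 30), at the cost of some blurring, which is the usual trade-off of averaging filters.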

Fundamental Steps in Digital Image Processing
q Digital Image processing has different phases/steps in processing a digital
image
○ Color Image Processing: If the image is a color image, additional processing
steps are required to manipulate and analyze color information
■ Color image processing techniques involve operations such as color space
conversion, color correction, color segmentation, and color-based feature
extraction
○ Compression: as the name implies, deals with techniques for reducing the
storage required to save an image, or the bandwidth required to transmit it
■ Various compression algorithms are employed to remove redundant or irrelevant
data from the image while preserving important visual information
■ A common example is the JPEG (Joint Photographic Experts Group) image compression standard, whose files typically use the .jpg extension

Fundamental Steps in Digital Image Processing
q Digital Image processing has different phases/steps in processing a digital
image
○ Segmentation: Segmentation involves dividing the image into meaningful
regions or objects
■ This step is useful for object recognition, tracking, or analysis
■ Segmentation techniques can be based on properties such as color, texture, intensity, or
shape
■ Segmentation is one of the most difficult tasks in digital image processing, whereby weak
segmentation algorithms almost always guarantee failure, and the more accurate the
segmentation, the more likely object recognition is to succeed

○ Feature Extraction: Feature extraction involves extracting relevant features or characteristics from the image
■ These features can capture important information for image analysis or classification tasks
■ Examples of features include edges, corners, or texture patterns
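A tiny sketch of one such feature: horizontal intensity differences, whose large values mark vertical edges. The image values below are illustrative.

```python
# A minimal edge-feature sketch: the absolute difference of neighboring
# pixels along each row. Large differences indicate vertical edges.

def horizontal_gradient(image):
    """Absolute pixel-to-pixel difference along each row."""
    return [[abs(row[y + 1] - row[y]) for y in range(len(row) - 1)]
            for row in image]

img = [[0, 0, 255, 255],
       [0, 0, 255, 255]]             # dark left half, bright right half

print(horizontal_gradient(img))      # [[0, 255, 0], [0, 255, 0]]
```

The single column of 255s localizes the dark-to-bright boundary; practical detectors (e.g., Sobel) refine this same differencing idea.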

Fundamental Steps in Digital Image Processing

q Digital Image processing has different phases/steps in processing a digital image
○ Recognition: is the process that assigns a label (e.g., “vehicle”) to an object
based on its descriptors
■ Recognition techniques are applied to locate and identify specific objects or
patterns within an image
■ These techniques can be based on various methods such as template
matching, machine learning, or deep learning algorithms

Fundamental Steps in Digital Image Processing
q Steps

Components of an Image Processing System
q Components

Components of an Image Processing System
q Image Sensors:
○ Image sensors sense the intensity, amplitude, coordinates, and other features of the images and pass the result to the image processing hardware
○ In sensing, two elements are required to acquire digital images
○ The first is a physical sensing device that is sensitive to the energy radiated by
the object we wish to image (Sensors)
○ The second, called a digitizer, is a device for converting the output of the physical sensing device into digital form
■ For instance, in a digital video camera, the sensors produce an electrical output
proportional to light intensity
■ The digitizer converts these outputs to digital data

Components of an Image Processing System
q Specialized Image Processing Hardware:
○ Consists of the digitizer plus hardware that performs other operations, such as an arithmetic logic unit (ALU) that performs arithmetic (e.g., addition and subtraction) and logical operations in parallel on entire images
■ This unit performs functions that require fast data throughput (e.g., digitizing and averaging video images at 30 frames/s) that a typical computer cannot handle
q Computer:
○ It is a general-purpose computer and can range from a PC to a supercomputer
depending on the application
○ In dedicated applications, sometimes a specially designed computer is used to
achieve the required level of performance

Components of an Image Processing System
q Software:
○ Consists of specialized modules that perform specific tasks
○ A well-designed package also includes the capability for the user to write code
that utilizes the specialized image-processing modules
q Hardcopy:
○ The devices for recording images include laser printers, film cameras, inkjet units, and digital units such as optical discs and CD-ROMs
○ Film provides the highest possible resolution, but paper is the obvious medium of choice for written material
○ For presentations, images are displayed on a digital medium if image projection
equipment is used

Components of an Image Processing System
q Other components are
○ Image Display
○ Mass Storage
○ Networking

Image Acquisition Process
q Image acquisition: is the process of capturing and collecting visual information from the real world and converting it into digital images that can be processed and analyzed by computers or other devices
q Several techniques are employed in image acquisition to capture and
convert images
q Digital Cameras: Digital cameras are widely used for image acquisition
processes
○ They have an image sensor such as a Charge-Coupled Device (CCD) or
Complementary-Metal-Oxide-Semiconductor (CMOS) sensor that captures the
incoming light and converts it into an electrical analog signal
○ The analog signal is then digitized using an Analog-to-Digital Converter (ADC),
producing a digital representation of the image

Image Acquisition Process
q Techniques employed in image acquisition to capture and convert images
q Scanners: Scanners are used to convert physical documents or images into
digital format
○ They use a light source that illuminates the document, and the reflected light is captured by a sensor
○ The sensor converts the light into an electrical analog signal, which is then digitized by an ADC
q X-ray Imaging: X-ray imaging is commonly used in medical and security
applications
○ X-ray machines emit X-rays that pass through the object being imaged, and a
detector captures the transmitted X-rays
○ The captured data is converted into digital images, revealing the internal
structures or objects

Image Acquisition Process
q Techniques employed in image acquisition to capture and convert images
q Digital Image Sensors: Integrated into various devices, such as
smartphones, tablets, and digital cameras, to convert light into digital image
data
q Sonar and Ultrasound: Sonar and ultrasound techniques involve emitting
high-frequency sound waves and capturing the reflected waves to create
images
○ Sonar is used in underwater imaging, while ultrasound is used in medical
imaging
○ The captured data is processed to generate digital images

Image Acquisition Process
q IN GENERAL: Most of the images are generated by the combination of an
“illumination” source and the reflection or absorption of energy from that
source by the elements of the “scene” being imaged
○ The illumination source, also known as the light source, is a fundamental
component in imaging systems that provides the necessary illumination for
capturing images
q The illumination may originate from a source of electromagnetic energy such as a radar, infrared, or X-ray system, from ultrasound, or even from a computer-generated illumination pattern
q Similarly, the scene elements could be familiar objects, but they can just as
easily be molecules, buried rock formations, or a human brain

Image Acquisition Process
q Depending on the nature of the source, illumination energy is reflected
from, or transmitted through objects
○ An example of reflection is when light is reflected from a planar surface
○ An example of transmission is when X-rays pass through a patient’s body to
generate a diagnostic X-ray film
q In some applications, the reflected or transmitted energy is focused onto a
photoconverter which converts the energy into visible light
○ Electron microscopy and some applications of gamma imaging use this
approach

Image Acquisition Process
q There are three principal sensor arrangements used to transform
illumination (light) energy into digital images
○ The idea is that incoming energy is converted into a voltage by the combination
of input electrical power and sensor material that is responsive to the
particular type of energy being detected (The sensor converts the optical
information into an electrical analog signal)
○ The captured image is in analog form; to process it digitally, an analog-to-digital converter (ADC) is used to convert the analog signal into a digital representation
○ The ADC maps the output voltage waveform intensity values of each pixel to
their corresponding digital values based on the quantization levels
○ From this process, a digital quantity is obtained from each sensor by digitizing
its response
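The ADC mapping described above can be sketched as follows. The reference voltage and bit depth are illustrative assumptions, not values from the lecture.

```python
# A hedged sketch of the ADC mapping: an analog voltage in [0, v_max]
# is mapped to one of 2**bits discrete codes. v_max and bits are
# illustrative assumptions (a 5 V, 8-bit converter).

def adc(voltage, v_max=5.0, bits=8):
    """Map an analog voltage onto an integer code in [0, 2**bits - 1]."""
    levels = 2 ** bits                       # 256 quantization levels for 8 bits
    code = int(voltage / v_max * (levels - 1))
    return min(max(code, 0), levels - 1)     # clamp to the valid code range

print(adc(0.0))    # 0   (minimum voltage, black)
print(adc(5.0))    # 255 (maximum voltage, white)
print(adc(2.5))    # 127 (mid-range voltage)
```

Each sensor's output voltage thus becomes one digital intensity value per pixel, which is the "digitizing its response" step the slide describes.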

Image Acquisition Process
q Self-Reading (Chapter 2, Section 2.3.1, 2.3.2, & 2.3.3) Sensor Array
○ Image Acquisition using Single Sensor
○ Image Acquisition using Sensor Strips
○ Image Acquisition using Sensor Array

Single Imaging Sensor

Sensor Strip

Simple Image Formation Model
q A simple image formation model describes the basic process by which an
image is formed
q The Image is formed based on the interaction of light with objects in a
scene
q It involves three main components: (Illumination, Reflection, and Imaging)
○ Illumination
■ The image formation process begins with an illumination source that provides light
that illuminates the scene (object to be imaged)
■ The illumination can be natural, such as sunlight, or artificial, such as a lamp
■ The intensity, direction, and color of the light source influence how the scene is
perceived and captured

Simple Image Formation Model
q It involves three main components: (Illumination, Reflection, and Imaging)
○ Reflection
■ When the light from the illumination source interacts with objects in the scene, it undergoes
reflection
■ Objects can reflect light in different ways depending on their surface properties
■ The reflection can be diffuse where light scatters uniformly in all directions, or specular
where light reflects at a specific angle like a mirror
■ The reflective properties of objects determine how they appear in the captured image

○ Imaging
■ After the light interacts with the objects in the scene, it enters an imaging system, such as a
camera or an optical sensor
■ The imaging system captures the light and converts it into an image
■ The image formation process involves the lens focusing the light onto a photosensitive
surface, such as a digital sensor or a film, which records the intensity and color information
of the light at different points
■ This recorded information forms the image

Simple Image Formation Model
q An image is defined by the two-dimensional (2-D) function 𝑓(𝑥, 𝑦)
q The value or amplitude of 𝑓 at spatial coordinates (𝑥, 𝑦) is a positive scalar quantity
q When an image is generated from a physical process, its intensity values are proportional to the energy radiated by a physical source (e.g., electromagnetic waves)
q As a consequence, 𝑓(𝑥, 𝑦) must be non-zero and finite, that is, 0 < 𝑓(𝑥, 𝑦) < ∞

q The function may be characterized by two components:
○ The amount of source illumination incident on the scene being viewed, and
○ The amount of illumination reflected by the objects in the scene

Simple Image Formation Model
q The two components are called the illumination and reflectance components and are denoted by 𝑖(𝑥, 𝑦) and 𝑟(𝑥, 𝑦), respectively
q The two functions combine as a product to form 𝑓(𝑥, 𝑦):

𝑓(𝑥, 𝑦) = 𝑖(𝑥, 𝑦) 𝑟(𝑥, 𝑦), where 0 < 𝑖(𝑥, 𝑦) < ∞ and 0 ≤ 𝑟(𝑥, 𝑦) ≤ 1

q The nature of 𝑖(𝑥, 𝑦) is determined by the illumination source, and 𝑟(𝑥, 𝑦) is determined by the characteristics of the imaged objects
q 𝑟(𝑥, 𝑦) = 0 means total absorption and 𝑟(𝑥, 𝑦) = 1 means total reflectance
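A worked sketch of this product model, with illustrative illumination and reflectance values:

```python
# f(x, y) = i(x, y) * r(x, y), with 0 < i < infinity and 0 <= r <= 1.
# The numeric values below are illustrative, not from the lecture.

def image_intensity(i, r):
    """Product of illumination i(x, y) and reflectance r(x, y)."""
    assert i > 0 and 0.0 <= r <= 1.0
    return i * r

print(image_intensity(100.0, 0.6))   # a surface reflecting 60% of the light
print(image_intensity(100.0, 0.0))   # 0.0   (total absorption: black)
print(image_intensity(100.0, 1.0))   # 100.0 (total reflectance)
```

The same illumination produces different recorded intensities depending only on each object's reflectance, which is why surfaces appear different under one light source.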

Simple Image Formation Model
q The intensity of a monochrome image at any coordinates (𝑥₀, 𝑦₀) is called the grey level (𝑙) of the image at that point, that is,

𝑙 = 𝑓(𝑥₀, 𝑦₀)

q The grey level value lies in the range 𝐿min ≤ 𝑙 ≤ 𝐿max, where 𝐿min is positive and 𝐿max is finite
q The interval [𝐿min, 𝐿max] is called the gray (or intensity) scale
q Common practice is to shift the interval numerically to the interval [0, 𝐿 − 1], where 𝑙 = 0 is considered black and 𝑙 = 𝐿 − 1 is considered white on the gray scale
q All intermediate values are shades of gray varying from black to white

Image Sampling and Quantization
q It is observed that there are numerous ways to acquire images, but the
objective is to generate digital images from sensed data
q The output of most sensors is a continuous voltage waveform
q To create a digital image, we need to convert the continuous sensed data
into digital form
q This involves two processes which are sampling and quantization
q Sampling and quantization are the two important processes used to convert a continuous analog image into a digital image
q An image may be continuous with respect to the x- and y-coordinates, and
also in amplitude

Image Sampling and Quantization
q To convert a continuous image to digital form, the image function must be sampled in both coordinates and amplitude
q Given a continuous image 𝑓(𝑥, 𝑦), digitizing the coordinate values is called sampling
q Digitizing the amplitude (intensity) values is called quantization
q Amplitude is the intensity or brightness of a pixel in a grayscale or color image
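The two steps can be sketched on a 1-D signal (real images are sampled in both x and y). The signal, interval, and level counts below are illustrative assumptions.

```python
import math

# Sampling discretizes the coordinate; quantization discretizes the
# amplitude. A toy 1-D sketch with an illustrative continuous signal.

def sample_and_quantize(f, x_max, n_samples, n_levels):
    """Sample f on [0, x_max) at n_samples points, then map each
    amplitude in [0, 1] to an integer level in [0, n_levels - 1]."""
    step = x_max / n_samples
    xs = [k * step for k in range(n_samples)]        # sampling: discrete coordinates
    return [min(int(f(x) * n_levels), n_levels - 1)  # quantization: discrete amplitudes
            for x in xs]

# a continuous "scan line" with amplitude in [0, 1]
signal = lambda x: 0.5 + 0.5 * math.sin(x)
print(sample_and_quantize(signal, math.pi, 4, 4))    # [2, 3, 3, 3]
```

Increasing n_samples refines the spatial resolution, while increasing n_levels refines the gray scale; both together determine how faithfully the digital image represents the continuous one.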

End of Lecture 1

