
DIGITAL IMAGE PROCESSING

(ELECTIVE - III) LA806-5


MODULE 5
Prepared by
AKAS G KAMAL
Asst. Professor
ECE Department
Amaljyothi College of Engineering
Image Coding & Compression
• Basic principles
• Run length coding
• Variable length coding
• Bit plane coding
• Predictive coding – loss-less and lossy
• Transform coding
• Image compression standards
Need for compact image representation

• Consider the amount of data required to represent a two-hour standard-definition television movie:
• One color video frame = 720 x 480 pixels x 8 bits x 3 color channels
• Video players display 30 frames per second
• Two-hour movie = 720 x 480 x 3 x 30 x 2 x 60 x 60 = 2.24 x 10^11 bytes
  = 224 GB = 320 CDs = 48 DVDs


Need for compact image representation

• High definition: 1920 x 1080 pixels x 24 bits per frame
• How many CDs? 1920 x 1080 x 3 x 30 x 2 x 60 x 60 ≈ 1.34 x 10^12 bytes ≈ 1920 CDs
Need for compact image representation

• Time required to send a small 128 x 128 x 24-bit full-color image over a 56 kbps internet connection:
• 128 x 128 x 24 bits / 56,000 bits per second ≈ 7 s
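As a quick sanity check of these figures, here is a minimal back-of-envelope sketch in Python; the nominal capacities of 700 MB per CD and 4.7 GB per DVD are assumptions chosen to match the counts above:

```python
# Back-of-envelope estimates for uncompressed image and video data.

def raw_video_bytes(width, height, bytes_per_pixel, fps, seconds):
    """Size in bytes of uncompressed video."""
    return width * height * bytes_per_pixel * fps * seconds

TWO_HOURS = 2 * 60 * 60  # seconds

sd = raw_video_bytes(720, 480, 3, 30, TWO_HOURS)
hd = raw_video_bytes(1920, 1080, 3, 30, TWO_HOURS)

print(f"SD movie: {sd:.3g} B = {sd / 1e9:.0f} GB")               # ~2.24e+11 B = 224 GB
print(f"SD movie: {sd / 700e6:.0f} CDs, {sd / 4.7e9:.0f} DVDs")  # ~320 CDs, ~48 DVDs
print(f"HD movie: {hd / 700e6:.0f} CDs")                         # ~1920 CDs

# Time to send a 128 x 128 x 24-bit image over a 56 kbps link.
bits = 128 * 128 * 24
print(f"Transmission time: {bits / 56_000:.1f} s")               # ~7.0 s
```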
Goal of Image Compression

• The goal of image compression is to reduce the amount of data required to represent a digital image.

• Reduce storage requirements and increase transmission rates.
Data compression

• The term data compression refers to the process of reducing the amount of data required to represent a given quantity of information.
Data ≠ Information

• Data and information are not synonymous terms!

• Data is the means by which information is conveyed.

• Data compression aims to reduce the amount of data required to represent a given quantity of information while preserving as much information as possible.
Data vs Information (cont’d)

• The same amount of information can be represented by various amounts of data, e.g.:

Ex1: Your wife, Helen, will meet you at Logan Airport in Boston at 5 minutes past 6:00 pm tomorrow night.
Ex2: Your wife will meet you at Logan Airport at 5 minutes past 6:00 pm tomorrow night.
Ex3: Helen will meet you at Logan at 6:00 pm tomorrow night.
Information vs Data

DATA = INFORMATION + REDUNDANT DATA
Data Redundancy

• If n1 and n2 denote the number of information-carrying units in two data sets that represent the same information, the compression ratio is

  CR = n1 / n2

Data Redundancy (cont’d)

• Relative data redundancy:

  RD = 1 - 1/CR

• Example: if CR = 10, then RD = 0.9, i.e., 90% of the data in the first data set is redundant. A sketch of both quantities follows.
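A minimal sketch of these two quantities, assuming n1 and n2 count information-carrying units (e.g., bits) in the original and compressed representations:

```python
def compression_ratio(n1, n2):
    """CR = n1 / n2: how many times smaller the compressed representation is."""
    return n1 / n2

def relative_redundancy(cr):
    """RD = 1 - 1/CR: fraction of the original data that is redundant."""
    return 1 - 1 / cr

cr = compression_ratio(10_000, 1_000)  # e.g., 10,000 bits compressed to 1,000 bits
print(cr)                              # 10.0
print(relative_redundancy(cr))         # 0.9 -> 90% of the original data is redundant
```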
Approaches

• Lossless
– Information preserving
– Low compression ratios

• Lossy
– Not information preserving
– High compression ratios

• Trade-off: image quality vs compression ratio


Types of Data Redundancy

(1) Coding redundancy
(2) Spatial and temporal (interpixel) redundancy
(3) Irrelevant (psychovisual) redundancy

• Compression attempts to reduce one or more of these redundancy types.
Coding Redundancy
• Code: a list of symbols (letters, numbers, bits, etc.)
• Code word: a sequence of symbols used to represent a piece of information or an event (e.g., a gray level).
• Code word length: number of symbols in each code word.
Coding Redundancy (cont’d)

For an N x M image:
rk: k-th gray level
P(rk): probability of rk
l(rk): number of bits used to represent rk

Expected value: E(X) = Σx x · P(X = x)

Average code length: Lavg = Σk l(rk) · P(rk) bits/pixel
Coding Redundancy (cont’d)

• l(rk) = constant length

Example: with a fixed-length code of l bits, Lavg = Σk l · P(rk) = l bits/pixel, regardless of how the gray levels are distributed.
Coding Redundancy (cont’d)

• l(rk) = variable length

• Consider the probability of the gray levels: assigning shorter code words to the more probable levels and longer code words to the rarer ones lowers Lavg (see the sketch below).
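A minimal sketch of the effect on Lavg, using an assumed four-level distribution (the probabilities and code lengths below are illustrative, not from the slides):

```python
# Lavg = sum over k of l(r_k) * P(r_k), in bits/pixel.
probs = [0.5, 0.25, 0.15, 0.10]   # assumed P(r_k) for four gray levels

fixed_lengths = [2, 2, 2, 2]      # 2-bit fixed-length code
variable_lengths = [1, 2, 3, 3]   # prefix-free code: 0, 10, 110, 111

def l_avg(lengths, probs):
    """Average number of bits per pixel under a given code."""
    return sum(l * p for l, p in zip(lengths, probs))

print(l_avg(fixed_lengths, probs))     # 2.00 bits/pixel
print(l_avg(variable_lengths, probs))  # 1.75 bits/pixel: coding redundancy removed
```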
Spatial and temporal (Interpixel) redundancy

• Also known as spatial redundancy, geometric redundancy, and interframe redundancy.

• Interpixel redundancy implies that any pixel value can be reasonably predicted by its neighbors (i.e., correlated).

[Figure: two structurally different images with identical histograms, i.e., all 256 intensities are equally probable.]
Spatial and temporal redundancy

• Spatial redundancy can be eliminated by representing the image as a sequence of run-length pairs, where each run-length pair specifies the start of a new intensity and the number of consecutive pixels that have that intensity (see the sketch below).
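A minimal run-length coding sketch for a single image row (plain Python; the function names are illustrative):

```python
def rle_encode(row):
    """Encode a row of pixel values as (intensity, run_length) pairs."""
    pairs = []
    for value in row:
        if pairs and pairs[-1][0] == value:
            pairs[-1][1] += 1          # extend the current run
        else:
            pairs.append([value, 1])   # a new intensity starts a new run
    return [tuple(p) for p in pairs]

def rle_decode(pairs):
    """Expand (intensity, run_length) pairs back into the original row."""
    row = []
    for value, length in pairs:
        row.extend([value] * length)
    return row

row = [255, 255, 255, 0, 0, 17, 17, 17, 17]
pairs = rle_encode(row)
print(pairs)                     # [(255, 3), (0, 2), (17, 4)]
assert rle_decode(pairs) == row  # lossless round trip
```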
Spatial and temporal (Interpixel) redundancy

• Interpixel redundancy implies that any pixel value can be reasonably predicted by its neighbors (i.e., correlated).

• Correlation: f(x) ∘ g(x) = ∫ f(a) g(x + a) da

• Autocorrelation: the special case f(x) = g(x)
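A small sketch of this correlation using NumPy: the correlation coefficient between each row and a shifted copy of itself is near 1 for a smooth image and near 0 for noise (the synthetic data below is purely illustrative):

```python
import numpy as np

def neighbor_correlation(row, shift=1):
    """Correlation coefficient between a row and the same row shifted by `shift` pixels."""
    a = row[:-shift].astype(float)
    b = row[shift:].astype(float)
    return np.corrcoef(a, b)[0, 1]

x = np.arange(256)
smooth = 127 + 100 * np.sin(2 * np.pi * x / 64)   # slowly varying "image row"
noise = np.random.randint(0, 256, size=256)       # no spatial structure

print(neighbor_correlation(smooth))  # close to 1: neighbors predict each other well
print(neighbor_correlation(noise))   # close to 0: no interpixel redundancy to exploit
```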
Psychovisual redundancy

• Certain information has less relative importance than other information in normal visual processing.

• This information is said to be psychovisually redundant.
Psychovisual redundancy

• The human eye does not respond with equal sensitivity to all visual information.

• It is more sensitive to the lower frequencies than to the higher frequencies in the visual spectrum.
Psychovisual redundancy

• Idea: discard data that is perceptually insignificant!
• This is an irreversible operation (visual information is lost);
• quantization therefore results in lossy data compression (see the sketch below).
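A minimal sketch of such a quantizer, mapping 256 gray levels down to 16 (NumPy; the random array stands in for a real image):

```python
import numpy as np

def quantize(image, levels=16):
    """Requantize an 8-bit image to `levels` gray levels (irreversible)."""
    step = 256 // levels
    return ((image // step) * step + step // 2).astype(np.uint8)  # bin midpoints

image = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
coarse = quantize(image)
print(np.unique(coarse).size)  # at most 16 distinct values remain
# The discarded low-order information cannot be recovered: lossy compression.
```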
Fidelity Criteria

• How close is the reconstructed image f̂(x, y) to the original f(x, y)?

• Criteria
– Subjective: based on human observers
– Objective: mathematically defined criteria
Subjective Fidelity Criteria
Objective Fidelity Criteria

The error between the original f(x, y) and the reconstruction f̂(x, y) is

  e(x, y) = f̂(x, y) − f(x, y)

so the total error between the two images is

  Σ(x=0 to M−1) Σ(y=0 to N−1) [f̂(x, y) − f(x, y)]

The root-mean-square error averaged over the whole image is

  e_rms = sqrt( (1/MN) Σx Σy [f̂(x, y) − f(x, y)]² )
Objective Fidelity Criteria

• Mean-square signal-to-noise ratio (SNRms):

  SNRms = Σx Σy f̂(x, y)² / Σx Σy [f̂(x, y) − f(x, y)]²
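A minimal sketch of both objective measures (NumPy; f is the original image and f_hat its reconstruction):

```python
import numpy as np

def rmse(f, f_hat):
    """Root-mean-square error averaged over the whole image."""
    err = f_hat.astype(float) - f.astype(float)
    return np.sqrt(np.mean(err ** 2))

def snr_ms(f, f_hat):
    """Mean-square signal-to-noise ratio of the reconstruction."""
    err = f_hat.astype(float) - f.astype(float)
    return np.sum(f_hat.astype(float) ** 2) / np.sum(err ** 2)

f = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)          # "original"
f_hat = np.clip(f.astype(int) + np.random.randint(-5, 6, f.shape), 0, 255)

print(rmse(f, f_hat))    # small RMSE -> reconstruction close to the original
print(snr_ms(f, f_hat))  # large SNR  -> strong signal relative to the error
```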
Objective Fidelity Criteria (cont’d)

[Figure: three reconstructions of the same image with RMSE = 5.17, RMSE = 15.67, and RMSE = 14.17.]
Compression Types

• Error-Free (Loss-less) Compression
• Lossy Compression
Image Compression Model

[Figure: block diagram of the image compression model.]

Image Compression Model (cont’d)

• Mapper: transforms input data in a way that facilitates reduction of interpixel redundancies.
Image Compression Model (cont’d)

• Quantizer: reduces the accuracy of the mapper’s output in accordance with some pre-established fidelity criterion.
Image Compression Model (cont’d)

• Symbol encoder: assigns the shortest code words to the most frequently occurring output values.
Image Compression Model (cont’d)

• In the decoder, the inverse operations are performed.

• But … quantization is irreversible in general.
Compression Model

• The source encoder is responsible for removing redundancy (coding, inter-pixel, psycho-visual).

• The channel encoder ensures robustness against channel noise.
Error-Free Compression
• Some applications require no error in compression (medical images, business documents, etc.).
• Compression ratios of CR = 2 to 10 can be expected.
• Makes use of coding redundancy and inter-pixel redundancy.
• Ex: Huffman codes, LZW, arithmetic coding, 1-D and 2-D run-length encoding, loss-less predictive coding, and bit-plane coding.
Error-Free Compression
• Variable-Length Coding
• LZW Coding
• Bit-Plane Coding
• Lossless Predictive Coding
Lossy Compression
• Lossy Predictive Coding
• Transform Coding
• Wavelet Coding
Huffman Coding

• The most popular technique for removing coding redundancy is due to Huffman (1952).

• Huffman coding yields the smallest possible number of code symbols per source symbol.

• The resulting code is optimal (subject to the constraint that the source symbols are coded one at a time).
Huffman Codes
• Forward Pass
1. Sort the symbol probabilities
2. Combine the two lowest probabilities
3. Repeat Step 2 until only two probabilities remain.
Huffman Codes

• Backward Pass
Assign code symbols going backwards

  Lavg = 1(0.4) + 2(0.3) + 3(0.1) + 4(0.1) + 5(0.06) + 5(0.04)
       = 2.2 bits/symbol
Huffman Decoding

• After the code has been created, coding/decoding can be implemented using a look-up table.
• Note that decoding is done unambiguously, because Huffman codes are prefix-free (see the sketch below).
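A minimal Huffman sketch in Python built on the probabilities from the Lavg example above (the symbol names a–f are illustrative; the individual code lengths may differ from the slide’s tree, but the average length is the same 2.2 bits):

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code: repeatedly merge the two least probable nodes."""
    # Heap entries: (probability, tie_breaker, {symbol: code_so_far}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # least probable subtree
        p2, _, c2 = heapq.heappop(heap)  # second least probable subtree
        merged = {s: "0" + w for s, w in c1.items()}   # backward pass: prepend bits
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

probs = {"a": 0.4, "b": 0.3, "c": 0.1, "d": 0.1, "e": 0.06, "f": 0.04}
code = huffman_code(probs)
print(code)
print(sum(len(code[s]) * p for s, p in probs.items()))  # Lavg = 2.2 bits/symbol

# Decoding with a look-up table: prefix-free codes decode unambiguously.
lookup = {w: s for s, w in code.items()}
bits = "".join(code[s] for s in "abacab")
decoded, buf = [], ""
for bit in bits:
    buf += bit
    if buf in lookup:
        decoded.append(lookup[buf])
        buf = ""
print("".join(decoded))  # 'abacab'
```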
