Quantization
Prepared by:
Prof. Pooja M. Bharti
IT Department
Laxmi Institute of Technology
Introduction
• In lossless compression, the reconstructed data is
identical to the original data, so only a limited amount
of compression can be obtained with lossless
compression.
• If resources are limited and we do not require
absolute integrity, we can improve the amount of
compression by accepting a certain degree of loss
during compression.
Cont…
• Performance measure in lossless compression: rate
• Performance measure in lossy compression: rate and
distortion
• Main goal of lossy compression: to suffer the minimum
amount of distortion while compressing to the lowest
possible rate.
• There is a tradeoff between minimizing the rate and
keeping the distortion small.
Distortion criteria
• Distortion is a measure of the difference between the
original data and the reconstructed data.
• Squared-error measure: d(x, y) = (x − y)²
• Example quantizer: q(x) = round(x)
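For illustration, a small Python sketch (not from the original slides; the sample values are made up) that applies the rounding quantizer and measures the average squared-error distortion d(x, y) = (x − y)²:

import numpy as np

x = np.array([0.2, 1.7, -0.4, 3.1, 2.5])   # original samples (illustrative values)
y = np.round(x)                             # reconstructed values from q(x) = round(x)
mse = np.mean((x - y) ** 2)                 # average of d(x, y) = (x - y)^2
print(mse)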
Quantizers can be scalar (one input sample at a time) or vector (a block of samples at a time)
Example: a quantizer with M = 8 output levels needs R = log2 8 = 3 bits per output with fixed-length codewords
Cont…
• Example: an eight-level quantizer with one possible variable-length codeword assignment:

Output   Codeword
y1       1110
y2       1100
y3       100
y4       00
y5       01
y6       101
y7       1101
y8       1111
Cont…
• For variable-length coding, the rate will depend
on the probability of occurrence of the outputs.
• Variable-length coding: if li is the length of the codeword
corresponding to the output yi, and P(yi) is the probability
of occurrence of yi, the average rate is
R = Σ P(yi) li  bits per output
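As a sketch of how this rate is computed (Python; the probabilities below are assumed example values, not taken from the slides), using the codeword lengths of the eight-level quantizer above:

codewords = ["1110", "1100", "100", "00", "01", "101", "1101", "1111"]
lengths = [len(c) for c in codewords]                       # l_i
probs = [0.03, 0.07, 0.15, 0.25, 0.25, 0.15, 0.07, 0.03]    # assumed P(y_i); must sum to 1
rate = sum(p * l for p, l in zip(probs, lengths))           # R = sum_i P(y_i) * l_i
print(rate, "bits per output")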
Distortion-optimized quantization
Given: Rate constraint R ≤ R*
Find: { bi }, { yi }, binary codes
Such that: the quantization error variance σq² is minimized
Uniform Quantizer
All intervals are of the same size
Boundaries are evenly spaced (step size: ∆), except for the outermost
intervals
Reconstruction
Usually the midpoint of the interval is selected as the reconstruction value
Quantizer types:
Midrise quantizer: zero is not an output level, so there is an even number
of reconstruction levels
Midtread quantizer: zero is an output level, so there is an odd number of
reconstruction levels
D = ∆²/12 (granular MSQE when the quantization error is uniformly distributed over an interval of width ∆)
Midrise vs. Midtread Quantizer
[Figure: input-output characteristics of a midrise (left) and a midtread (right) quantizer]
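A minimal Python sketch of the two types (the step size ∆ = 0.5 and the uniform test input are assumptions for illustration), which also checks the D = ∆²/12 result numerically:

import numpy as np

delta = 0.5   # assumed step size

def midrise(x):
    # zero is a decision boundary, not an output level
    return delta * (np.floor(x / delta) + 0.5)

def midtread(x):
    # zero is an output level
    return delta * np.round(x / delta)

x = np.random.uniform(-2, 2, 100000)        # uniformly distributed test input
for q in (midrise, midtread):
    d = np.mean((x - q(x)) ** 2)            # measured MSQE
    print(q.__name__, d, delta**2 / 12)     # both should be close to delta^2 / 12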
Adaptive Quantization
We can adapt the quantizer to the statistics of the input
(mean, variance, pdf)
Forward adaptive (encoder-side analysis)
Divide the input source into blocks
Analyze block statistics
Set quantization scheme
Send the scheme to the decoder via side channel
Backward adaptive (decoder-side analysis)
Adaptation based on quantizer output only
Adjust accordingly (encoder-decoder in sync)
No side channel necessary
Forward Adaptive Quantization (FAQ)
Choosing analysis block size is a major issue
Block size too large
Processing delay
More storage required
Block size too small
More side channel information
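A rough Python sketch of forward adaptation (the block size, number of levels, and per-block rule are illustrative assumptions): each block is analyzed, a step size is derived from its statistics, and that step size is the side information the decoder would need.

import numpy as np

def forward_adaptive_quantize(x, block_size=128, levels=16):
    codes, side_info = [], []
    for start in range(0, len(x), block_size):
        block = x[start:start + block_size]
        # assumed scheme: uniform quantizer scaled to the block's peak value
        delta = 2 * np.max(np.abs(block)) / levels + 1e-12
        side_info.append(delta)                          # sent to the decoder via side channel
        codes.append(np.round(block / delta).astype(int))
    return codes, side_info

codes, side_info = forward_adaptive_quantize(np.random.randn(1000))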
Backward Adaptive Quantization (BAQ)
Key idea: only the encoder sees the input source. If we do not
want to use a side channel to tell the decoder how to adapt
the quantizer, we can only use the quantized output to adapt
the quantizer.
Possible solution:
Observe how many output values fall in the outer levels and in the
inner levels
If the counts match the assumed pdf, the step size ∆ is good
If too many values fall in the outer levels, ∆ should be enlarged;
otherwise, ∆ should be reduced
Issue: estimating the pdf this way requires a large number of observations
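One possible realization of this idea, sketched in Python (in the spirit of the Jayant quantizer; the expansion and contraction factors are assumptions, not values from the slides). The step size is adapted from the previous output alone, so a decoder can repeat the same adaptation without any side channel.

import numpy as np

def backward_adaptive_quantize(x, levels=8, delta0=0.1):
    delta, codes = delta0, []
    inner_max = levels // 2 - 1                # largest inner-level index
    for sample in x:
        q = int(np.clip(np.round(sample / delta), -(levels // 2), levels // 2))
        codes.append(q)
        # adapt using only the output value, which the decoder also sees
        if abs(q) > inner_max:
            delta *= 1.5                       # outer level hit -> enlarge step size
        else:
            delta *= 0.95                      # inner level hit -> shrink step size slightly
        delta = max(delta, 1e-6)               # keep the step size positive
    return codes

codes = backward_adaptive_quantize(np.random.randn(500))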
Non-uniform Quantization
For a uniform quantizer, the decision boundaries are
determined by a single parameter ∆.
We can certainly reduce the quantization error further
if each decision boundary can be selected freely.
pdf-optimized Quantization
Given fX(x), we can try to minimize the MSQE:
σq² = Σj ∫[bj−1, bj] (x − yj)² fX(x) dx
Setting the derivatives with respect to yj and bj to zero gives two conditions:
yj = ∫[bj−1, bj] x fX(x) dx / ∫[bj−1, bj] fX(x) dx   (each output is the centroid of its interval)
bj = (yj + yj+1) / 2   (each boundary is the midpoint of the neighboring outputs)
Start with an initial guess for y1 and continue the process until all {bj} and {yj} are found
Cont…
If the initial guess of y1 does not fulfill the termination
condition (the last output yM must equal the centroid of its
interval, to within a small tolerance), adjust the guess for y1
and repeat the procedure.
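A hedged numerical sketch of this design loop in Python (the Gaussian source, M = 4 levels, and the sample-based centroid computation are assumptions for illustration): boundaries are placed at the midpoints of neighboring outputs, outputs at the centroids of their intervals, and the loop repeats until the values stop changing.

import numpy as np

def lloyd_max(samples, M=4, iters=200):
    y = np.linspace(samples.min(), samples.max(), M)   # initial guess for {y_j}
    for _ in range(iters):
        b = (y[:-1] + y[1:]) / 2                       # boundary condition: b_j = (y_j + y_{j+1}) / 2
        idx = np.digitize(samples, b)                  # assign samples to intervals
        y_new = np.array([samples[idx == j].mean() if np.any(idx == j) else y[j]
                          for j in range(M)])          # centroid condition for y_j
        if np.allclose(y_new, y):                      # stop when the levels no longer change
            break
        y = y_new
    return b, y

boundaries, outputs = lloyd_max(np.random.randn(100000), M=4)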