Quantization: Prof. Pooja M. Bharti IT Department Laxmi Institute of Technology

This document provides an overview of quantization as a lossy compression technique. It defines quantization as mapping a large set of input values to a smaller set of output values. Quantization introduces distortion but allows for greater compression than lossless techniques. The key aspects covered include: - Quantization maps a continuous range of values to a discrete set, introducing distortion but enabling compression. - Performance is measured by rate and distortion, with the goal being minimum distortion at the lowest possible rate. - Types of quantization include scalar and vector: scalar quantization operates on individual values, while vector quantization operates on groups of values. - Quantizers consist of an encoder mapping and a decoder mapping, whose design determines the compression achieved and the distortion incurred. Optimization aims to minimize the rate subject to a distortion constraint, or the distortion subject to a rate constraint.


Quantization

Prepared by:
Prof. Pooja M. Bharti
IT Department
Laxmi Institute of Technology
Introduction
• In lossless compression, the reconstructed data is
identical to original data. So only limited amount of
compression can be obtained with lossless
compression.
• If resources are limited and we do not require
absolute integrity, we can improve the amount of
compression by accepting a certain degree of loss
during compression.
Cont…
• Performance measure in lossless compression: rate
• Performance measure in lossy compression: rate and
distortion
• Main Goal of lossy compression: to suffer minimum
amount of distortion while compressing to the lowest
possible rate.
• There is a tradeoff between minimizing the rate and
keeping the distortion small.
Distortion criteria
• Distortion is the measure of difference between
original data and reconstructed data.

d(x, y) = (x − y)²

• Here, x is original data and y is reconstructed data


Quantization
• Definition:
– Quantization: a process of representing a large –
possibly infinite – set of values with a much
smaller set.
– Scalar quantization: a mapping of an input value
x into one of a finite number of output values y:
Q: x → y
• One of the simplest and most general ideas in lossy
compression.
Cont…
• An example: any real number x can be rounded off to the
nearest integer, say

q(x) = round(x)

• Maps the real line R (a continuous space) into a discrete space.


• For example: 2.36 is mapped to 2, 3.143 is mapped to 3, etc.
 Source: real numbers in the range [–10.0, 10.0]
 Quantizer: Q(x) = ⌊x + 0.5⌋
 [–10.0, 10.0] → { –10, –9, …, –1, 0, 1, 2, …, 9, 10}
Quantization Types

• Scalar Quantization: the set of inputs and outputs of the
quantizer are scalar in nature
• Vector Quantization: the set of inputs and outputs of the
quantizer are vector in nature
Quantizer
• The design of the quantizer has a significant impact on
the amount of compression obtained and loss incurred in
a lossy compression scheme.
• A quantizer consists of an encoder mapping and a decoder
mapping.
– Encoder mapping: the encoder divides the range of the
source into a number of intervals, and each interval is
represented by a distinct codeword.
– Decoder mapping: for each received codeword, the
decoder generates a reconstruction value.
Components of Quantizer
• Encoder mapping:
• Divides the range of values that the source generates
into a number of intervals.
• Each interval is then mapped to a codeword. This is a
many-to-one, irreversible mapping.
• The codeword identifies only the interval, not the
original value. If the input comes from an analog
source, the encoder is called an A/D converter.
Components of Quantizer
• Decoder Mapping:
• Given the codeword, the decoder outputs an estimate of
the value that the source might have generated.
• Usually the midpoint of the interval is used, but a more
accurate estimate depends on the distribution of the
values in the interval.
• In estimating the value, the decoder introduces some
error.
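The encoder/decoder mappings described above can be sketched as follows, assuming a bounded source on [–10, 10) split into eight equal intervals; the names, range, and interval count are all illustrative:

```python
# Hypothetical sketch of a quantizer's encoder and decoder mappings.
LOW, HIGH, M = -10.0, 10.0, 8
STEP = (HIGH - LOW) / M            # interval width (2.5 here)

def encode(x: float) -> int:
    """Many-to-one mapping: value -> interval index (the 'codeword')."""
    i = int((x - LOW) // STEP)
    return min(max(i, 0), M - 1)   # clamp out-of-range inputs

def decode(i: int) -> float:
    """Codeword -> reconstruction value (the interval midpoint)."""
    return LOW + (i + 0.5) * STEP

x = 3.7
i = encode(x)
print(i, decode(i))   # interval index 5, midpoint 3.75
```

Note that `decode(encode(x))` recovers only the interval midpoint, never the original value: the encoder mapping is irreversible.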
Quantization Examples
 3-bit quantizer: encoder (A/D) and decoder (D/A) mappings [figure]
 Digitizing a sine wave [figure]
Quantization Input Output Mapping
 A quantizer describes the relation between the encoder
input values and the decoder output values.
 Example of a quantization function: [figure]
Quantization Problem Formulation
 Input:
 X – random variable
 fX(x) – probability density function (pdf)
 Output:
 {b_i}, i = 0, …, M: decision boundaries
 {y_i}, i = 1, …, M: reconstruction levels
 If the source is unbounded, the first and last decision
boundaries are ±∞ (they are often called “saturation” values)
Quantization Error
 If the quantization operation is denoted by Q(·), then
Q(x) = y_i iff b_{i−1} < x ≤ b_i.
The mean squared quantization error (MSQE) is then

σ_q² = Σ_{i=1}^{M} ∫_{b_{i−1}}^{b_i} (x − y_i)² f_X(x) dx

 Quantization error is also called quantization noise or
quantizer distortion; in the additive noise model, the
quantizer output is the input plus the quantization noise:
Q(x) = x + q(x)
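As an illustration of the MSQE, the following sketch estimates it by sampling, for the rounding quantizer applied to a uniform source (a Monte Carlo approximation of the integral above, not the analytic value):

```python
import math
import random

random.seed(0)
N = 200_000
total = 0.0
for _ in range(N):
    x = random.uniform(-10.0, 10.0)
    y = math.floor(x + 0.5)        # the rounding quantizer Q(x)
    total += (x - y) ** 2
msqe = total / N
print(msqe)                        # close to 1/12 for step size 1
```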
Cont…
• Rate of the quantizer
The average number of bits required to represent a
single quantizer output
• For fixed-length coding, the rate R is:
R = log2M
where, M is number of reconstruction levels

Example: M = 8  R = 3
Cont…
• Example: an eight-level quantizer with the following
variable-length codewords assigned to its outputs:

y1 → 1110
y2 → 1100
y3 → 100
y4 → 00
y5 → 01
y6 → 101
y7 → 1101
y8 → 1111
Cont…
• For variable-length coding, the rate depends on the
probability of occurrence of the outputs.
• If l_i is the length of the codeword corresponding to
the output y_i, and P(y_i) is the probability of
occurrence of y_i, the rate is given by:

R = Σ_{i=1}^{M} P(y_i) l_i
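A small sketch of this rate computation, using the codeword lengths of the eight-level example above; the probabilities are invented purely for illustration:

```python
# Codeword lengths l_i for y1..y8 from the eight-level example;
# the probabilities P(y_i) below are invented for illustration only.
lengths = [4, 4, 3, 2, 2, 3, 4, 4]
probs = [0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05]

assert abs(sum(probs) - 1.0) < 1e-9    # sanity check: a valid pmf
R = sum(p * l for p, l in zip(probs, lengths))
print(R)                               # below the 3 bits of fixed-length coding
```

With these probabilities the short codewords cover the likely outputs, so the average rate drops below the fixed-length rate of log2 8 = 3 bits.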
Quantizer design problem
• Fixed-length coding:
 Given an input pdf f_X(x) and the number of levels M in the
quantizer, find the decision boundaries {b_i} and the
reconstruction levels {y_i} so as to minimize the mean squared
quantization error.
Optimization of Quantization
 Rate-optimized quantization
 Given: distortion constraint σ_q² ≤ D*
 Find: {b_i}, {y_i}, binary codes
 Such that: R is minimized

 Distortion-optimized quantization
 Given: rate constraint R ≤ R*
 Find: {b_i}, {y_i}, binary codes
 Such that: σ_q² is minimized
Uniform Quantizer
 All intervals are of the same size
 Boundaries are evenly spaced (step size: Δ), except for the
outermost intervals
 Reconstruction
 Usually the midpoint is selected as the representative value
 Quantizer types:
 Midrise quantizer: zero is not an output level, so the number
of reconstruction levels is even
 Midtread quantizer: zero is an output level, so the number of
reconstruction levels is odd

 For inputs that stay within the granular region, the
distortion is D = Δ²/12
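The Δ²/12 figure can be checked empirically with a midtread uniform quantizer; the step size and source range below are arbitrary illustrative choices:

```python
import random

DELTA = 0.25

def q_midtread(x: float) -> float:
    """Midtread uniform quantizer: zero is an output level."""
    return DELTA * round(x / DELTA)

random.seed(1)
samples = [random.uniform(-4.0, 4.0) for _ in range(100_000)]
D = sum((x - q_midtread(x)) ** 2 for x in samples) / len(samples)
print(D, DELTA ** 2 / 12)   # the two values should be close
```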
Midrise vs. Midtread Quantizer
 [figures: midrise and midtread quantizer characteristics]
Adaptive Quantization
 We can adapt the quantizer to the statistics of the input
(mean, variance, pdf)
 Forward adaptive (encoder-side analysis)
 Divide input source in blocks
 Analyze block statistics
 Set quantization scheme
 Send the scheme to the decoder via side channel
 Backward adaptive (decoder-side analysis)
 Adaptation based on quantizer output only
 Adjust  accordingly (encoder-decoder in sync)
 No side channel necessary
Forward Adaptive Quantization (FAQ)
 Choosing analysis block size is a major issue
 Block size too large
 Processing delay
 More storage requirement
 Block size too small
 More side channel information
Backward Adaptive Quantization (BAQ)
 Key idea: only the encoder sees the input source. If we do
not want to use a side channel to tell the decoder how to
adapt the quantizer, we can only use the quantized output
to adapt the quantizer.
 Possible solution:
 Observe the number of output values that fall in the outer levels
and the inner levels
 If they match the assumed pdf, Δ is good
 If too many values fall in the outer levels, Δ should be enlarged;
otherwise, Δ should be reduced
 Issue: estimating the pdf requires many observations
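One possible realization of this idea (a Jayant-style multiplier scheme; the multiplier values below are illustrative, not from the slides) is to rescale Δ after every output, using only information the decoder also has:

```python
def adapt_step(delta: float, level: int, n_levels: int) -> float:
    """Scale the step size based only on which output level was used."""
    outer = {0, n_levels - 1}          # outermost reconstruction levels
    if level in outer:
        return delta * 1.5             # outer hit: range too small, expand
    return max(delta * 0.9, 1e-6)      # inner hit: shrink (with a floor)

delta = 1.0
for lvl in [3, 3, 7, 0, 3]:            # example sequence of output levels
    delta = adapt_step(delta, lvl, 8)
print(delta)
```

Because the adaptation depends only on the quantizer output, the decoder can run the identical update and track Δ without any side channel.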
Non-uniform Quantization
 For a uniform quantizer, the decision boundaries are
determined by a single parameter, Δ.
 We can reduce quantization error further if each
decision boundary can be selected freely.
pdf-optimized Quantization
 Given f_X(x), we can try to minimize the MSQE:

σ_q² = Σ_{j=1}^{M} ∫_{b_{j−1}}^{b_j} (x − y_j)² f_X(x) dx

 Setting the derivative of σ_q² with respect to y_j to zero and
solving for y_j gives the centroid of the interval:

y_j = [ ∫_{b_{j−1}}^{b_j} x f_X(x) dx ] / [ ∫_{b_{j−1}}^{b_j} f_X(x) dx ]

 Once the y_j are determined, the b_j can be selected as the
midpoints of adjacent reconstruction levels:

b_j = (y_{j+1} + y_j) / 2
Lloyd-Max Algorithm
 The Lloyd–Max algorithm solves for the y_j and b_j iteratively
until an acceptable solution is found

 Example: for a midrise quantizer, b_0 = 0 and
b_{M/2} is the largest input, so we only have to find
• {b_1, b_2, …, b_{M/2−1}} and
• {y_1, y_2, …, y_{M/2−1}}.
Cont…
 Beginning with j = 1, we want to find b_1 and y_1, where y_1
must be the centroid of the interval [b_0, b_1]
 Pick a value for y_1 (e.g. y_1 = 1), solve for b_1, and
compute y_2 by
y_2 = 2b_1 − y_1,
then find b_2 by solving the centroid condition for y_2 on
the interval [b_1, b_2]
 Continue the process until all {b_j} and {y_j} are found
Cont…
 If the initial guess of y_1 does not fulfill the termination
condition (the last level y_{M/2} must agree, within a
tolerance ε, with the centroid of the final interval
[b_{M/2−1}, b_{M/2}]),
we must pick a different y_1 and repeat the process.
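The alternating centroid/midpoint updates can be sketched as a Lloyd-style iteration; this batch variant (numeric integration over a grid, standard normal pdf, four levels) is an illustration, not the exact slide procedure:

```python
import math

# Grid approximation of the standard normal pdf on [-5, 5].
xs = [i * 0.001 - 5.0 for i in range(10001)]
pdf = [math.exp(-x * x / 2) / math.sqrt(2 * math.pi) for x in xs]

M = 4
y = [-1.5, -0.5, 0.5, 1.5]            # initial reconstruction levels
for _ in range(50):
    # Boundaries: midpoints of adjacent levels (edges fixed at +/-5).
    b = [-5.0] + [(y[j] + y[j + 1]) / 2 for j in range(M - 1)] + [5.0]
    # Levels: centroid of each interval under the pdf.
    for j in range(M):
        num = den = 0.0
        for x, p in zip(xs, pdf):
            if b[j] < x <= b[j + 1]:
                num += x * p
                den += p
        y[j] = num / den
print([round(v, 3) for v in y])       # roughly symmetric about zero
```

For the standard normal pdf this converges toward the well-known 4-level Lloyd–Max levels near ±0.45 and ±1.51.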


Mismatch Effects
 Non-uniform quantizers also suffer from mismatch effects.
 To reduce the effect, one can use an adaptive non-uniform
quantizer, or an adaptive uniform quantizer combined with
companded quantization techniques.
• [figure: variance mismatch on a 4-bit Laplacian non-uniform quantizer]
Companded Quantization (CQ)
 In companded quantization, we adjust (i.e., rescale) the
intervals so that the size of each interval is in proportion
to the probability of inputs falling in that interval

 This is equivalent to a non-uniform quantizer
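A sketch using the standard μ-law compressor as the compander (an illustrative choice; the slides' own compressor function is not reproduced here):

```python
import math

MU = 255.0  # mu-law parameter (illustrative)

def compress(x: float) -> float:
    """c(x): maps [-1, 1] onto [-1, 1], stretching small magnitudes."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y: float) -> float:
    """Inverse of compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantize_uniform(y: float, delta: float = 0.25) -> float:
    return delta * round(y / delta)

x = 0.02                                   # a small input value
y = expand(quantize_uniform(compress(x)))  # companded quantization
print(y, quantize_uniform(x))              # nonzero vs. 0.0 for direct quantization
```

Because the compressor stretches small magnitudes before uniform quantization, the small input survives with a nonzero reconstruction, whereas direct uniform quantization with the same step size would map it to zero.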
Example: CQ (1/2)
 The compressor function [figure]
 The uniform quantizer, step size Δ = 1.0 [figure]
Example: CQ (2/2)
 The expander function [figure]
 The equivalent non-uniform quantizer [figure]
Thank You