
Sampling and Quantization

- Summary on sampling
- Quantization
- Appendix: notes on convolution

Sampling in 1D
A continuous-time signal f(t) is turned into a discrete-time signal by multiplying it with a comb of Dirac delta functions:

$$f_s(t) = f(t) \sum_{k \in \mathbb{Z}} \delta(t - kT_s) = \sum_{k \in \mathbb{Z}} f(kT_s)\, \delta(t - kT_s), \qquad f[k] = f(kT_s)$$

[Figure: f(t), the Dirac comb, and the resulting samples.]

Nyquist theorem (1D)

At least 2 samples per period are needed to represent a periodic signal:

$$\omega_s = \frac{2\pi}{T_s} \geq 2\,\omega_{\max} \quad\Longleftrightarrow\quad T_s \leq \frac{\pi}{\omega_{\max}}$$
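A minimal numpy sketch of what this condition means in practice (not from the slides; the 5 Hz signal and the two sampling rates are illustrative): sampling below the Nyquist rate folds the frequency to a lower apparent one.

```python
import numpy as np

# Sampling a 5 Hz sinusoid. With fs = 50 Hz (> 2 * 5 Hz) the samples
# represent the signal; with fs = 8 Hz (< 2 * 5 Hz) they alias to 3 Hz.
f0 = 5.0  # signal frequency in Hz (illustrative value)
for fs in (50.0, 8.0):
    Ts = 1.0 / fs
    k = np.arange(8)
    samples = np.sin(2 * np.pi * f0 * k * Ts)    # f[k] = f(k * Ts)
    apparent = abs(f0 - fs * round(f0 / fs))     # frequency after folding
    print(f"fs = {fs} Hz -> apparent frequency {apparent} Hz")
```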

[Figure: 2D sampling patterns: delta pulse, Dirac comb, and Dirac brush.]

Nyquist theorem in 2D

Sampling in p dimensions: with sampling steps T_s^x, T_s^y in the 2D spatial domain,

$$s_T(\mathbf{x}) = \sum_{\mathbf{k} \in \mathbb{Z}^p} \delta(\mathbf{x} - \mathbf{k}T), \qquad f_T(\mathbf{x}) = f(\mathbf{x})\, s_T(\mathbf{x})$$

In the 2D Fourier domain the spectrum is replicated, and the Nyquist conditions become

$$\omega_s^x \geq 2\,\omega_{x,\max}, \quad \omega_s^y \geq 2\,\omega_{y,\max} \quad\Longleftrightarrow\quad \frac{1}{T_s^x} \geq 2\,\nu_{x,\max}, \quad \frac{1}{T_s^y} \geq 2\,\nu_{y,\max}$$

[Figure: baseband spectrum support bounded by nu_x,max and nu_y,max.]
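As a quick numeric illustration (the frequency values are assumed, not from the slides), the conditions translate directly into maximum admissible sampling steps:

```python
# 2D Nyquist conditions: 1/Ts_x >= 2*nu_x_max and 1/Ts_y >= 2*nu_y_max.
nu_x_max, nu_y_max = 100.0, 60.0      # assumed max frequencies (cycles/mm)
Ts_x = 1.0 / (2.0 * nu_x_max)         # 0.005 mm, i.e. 5 um per sample in x
Ts_y = 1.0 / (2.0 * nu_y_max)         # ~0.0083 mm per sample in y
print(Ts_x, Ts_y)
```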

Spatial aliasing

Resampling
Change of the sampling rate:
- Increase of the sampling rate (interpolation or upsampling): may introduce blurring and low visual resolution.
- Decrease of the sampling rate (rate reduction or downsampling): may introduce aliasing and/or loss of spatial details.

Downsampling


Upsampling

nearest neighbor (NN)


Upsampling

bilinear


Upsampling

bicubic

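A minimal sketch of down- and upsampling with Pillow (the filename and the factor of 4 are illustrative; requires Pillow >= 9.1 for the Image.Resampling enum):

```python
from PIL import Image

img = Image.open("image.png")   # hypothetical input file
w, h = img.size

# Downsampling: reducing the rate risks aliasing; LANCZOS low-pass filters first.
small = img.resize((w // 4, h // 4), Image.Resampling.LANCZOS)

# Upsampling back with the three interpolators discussed above.
nn = small.resize((w, h), Image.Resampling.NEAREST)   # nearest neighbor: blocky
bl = small.resize((w, h), Image.Resampling.BILINEAR)  # bilinear: smoother, blurry
bc = small.resize((w, h), Image.Resampling.BICUBIC)   # bicubic: sharper edges
```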

Quantization


Scalar quantization
A scalar quantizer Q approximates X by X̂ = Q(X), which takes its values over a finite set. The quantization operation can be characterized by the MSE between the original and the quantized signals. Suppose that X takes its values in [a, b], which may correspond to the whole real axis. We decompose [a, b] into K intervals {(y_{k-1}, y_k]}_{1 <= k <= K} of variable length, with y_0 = a and y_K = b. A scalar quantizer approximates all x in (y_{k-1}, y_k] by x_k:

$$\forall x \in (y_{k-1}, y_k], \quad Q(x) = x_k$$

Quantization
A/D conversion = quantization: a discrete function f[n] in L²(Z) enters a uniform quantizer, and the quantized discrete function f_q[n] = Q{f[n]} in L²(Z) comes out.

[Figure: signal f(t) with amplitude range [a, b], decision levels t_k, t_{k+1}, and reconstruction level r_k.]

Scalar quantization
The intervals (y_{k-1}, y_k] are called quantization bins. Rounding off to integers is an example where the quantization bins (y_{k-1}, y_k] = (k - 1/2, k + 1/2] have size 1 and x_k = k for any k in Z.

High-resolution quantization
Let p(x) be the probability density of the random source X. The mean-square quantization error is

$$d = E\left[(X - \hat{X})^2\right] = \int_{-\infty}^{+\infty} \left(x - Q(x)\right)^2 p(x)\, dx$$

HRQ
A quantizer is said to have high resolution if p(x) is approximately constant on each quantization bin. This is the case if the sizes Δ_k are sufficiently small relative to the rate of variation of p(x), so that these variations can be neglected within each quantization bin.

[Figure: density p(x) approximately constant over a bin of size Δ_k.]

Scalar quantization
Theorem 10.4 (Mallat): For a high-resolution quantizer, the mean-square error d is minimized when x_k = (y_{k-1} + y_k)/2, which yields

$$d = \frac{1}{12} \sum_{k=1}^{K} p_k\, \Delta_k^2$$

Uniform quantizer

For a uniform quantizer all bins have the same size Δ_k = Δ, so the high-resolution distortion reduces to d = Δ²/12.
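A minimal numpy sketch of such a uniform quantizer (the interval, level count, and uniform test source are assumed for illustration); the measured MSE should match Δ²/12:

```python
import numpy as np

def uniform_quantize(x, a, b, K):
    """Uniform scalar quantizer with K levels on [a, b]; midpoint
    reconstruction x_k = (y_{k-1} + y_k) / 2, as in Theorem 10.4."""
    delta = (b - a) / K                                # bin size
    idx = np.clip(np.floor((x - a) / delta), 0, K - 1) # bin index
    return a + (idx + 0.5) * delta                     # bin midpoints x_k

# For a uniform source on [0, 1], the measured MSE matches delta^2 / 12.
x = np.random.uniform(0.0, 1.0, 100_000)
xq = uniform_quantize(x, 0.0, 1.0, 32)
print(np.mean((x - xq) ** 2), (1.0 / 32) ** 2 / 12)
```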

Quantization
A/D conversion = quantization: a function f in L²(R) is mapped by the quantizer transfer function, which may be uniform or perceptual, to a discrete function f_q = Q{f} in L²(Z).

[Figure: staircase transfer function with reconstruction level r_k on the bin (y_k, y_{k+1}].]

The sensitivity of the eye decreases as the background intensity increases (Weber's law), which motivates perceptual quantization.

Quantization
[Figure: signal before (blue) and after (red) quantization by Q.]
Equivalent noise: n = f_q - f; additive noise model: f_q = f + n.

Quantization

[Figure: original image and versions quantized to 5, 10, and 50 levels.]

Distortion measure
The distortion is measured as the expectation of the mean square error (MSE) between the original and quantized signals:

$$D = E\left[(f_Q - f)^2\right] = \sum_{k=0}^{K} \int_{t_k}^{t_{k+1}} (f - r_k)^2\, p(f)\, df$$

For 8-bit images the peak signal-to-noise ratio is

$$\mathrm{PSNR} = 20 \log_{10} \frac{255}{\sqrt{\mathrm{MSE}}} = 20 \log_{10} \frac{255}{\left[ \frac{1}{NM} \sum_{i=1}^{N} \sum_{j=1}^{M} \left( I[i,j] - \hat{I}[i,j] \right)^2 \right]^{1/2}}$$

Lack of correlation with perceived image quality:
- Even though this is a very natural way to quantify quantization artifacts, it is not representative of the visual annoyance caused by the majority of common artifacts.
- Visual models are therefore used to define perception-based image quality assessment metrics.
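A direct transcription of the PSNR formula above into numpy (a minimal sketch; the function name and the 255 peak for 8-bit images are the only assumptions):

```python
import numpy as np

def psnr(img, img_ref, peak=255.0):
    """PSNR in dB between an 8-bit image and a reference."""
    mse = np.mean((img.astype(np.float64) - img_ref.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 20.0 * np.log10(peak / np.sqrt(mse))
```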

Example
The PSNR does not distinguish among different types of distortion that lead to the same RMS error between images: the MSE between images (b) and (c) is the same, and so is the PSNR, yet the visual annoyance of the artifacts is different.

Appendix
Convolution


Convolution

$$c(t) = f(t) * g(t) = \int_{-\infty}^{+\infty} f(\tau)\, g(t - \tau)\, d\tau$$

$$c[n] = f[n] * g[n] = \sum_{k=-\infty}^{+\infty} f[k]\, g[n - k]$$
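The discrete formula maps directly onto numpy's built-in convolution (the signal and kernel values are illustrative):

```python
import numpy as np

# c[n] = sum_k f[k] g[n - k]; 'full' output has length len(f) + len(g) - 1.
f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, 0.5])        # two-tap moving average (illustrative)
print(np.convolve(f, g))        # [0.5 1.5 2.5 1.5]
```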

2D Convolution
$$c(x, y) = f(x, y) * g(x, y) = \int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} f(\alpha, \beta)\, g(x - \alpha, y - \beta)\, d\alpha\, d\beta$$

$$c[i, k] = \sum_{n=-\infty}^{+\infty} \sum_{m=-\infty}^{+\infty} f[n, m]\, g[i - n, k - m]$$

The filter impulse response is rotated by 180 degrees.

Properties: associativity, commutativity, distributivity; the identity element is the delta δ[n, m].

2D Convolution

$$c[i, k] = \sum_{n=-\infty}^{+\infty} \sum_{m=-\infty}^{+\infty} f[n, m]\, g[i - n, k - m]$$

1. Fold g(n, m) about the origin.
2. Displace it by i and k: g(i - n, k - m).
3. Compute the sum of the products f(n, m) g(i - n, k - m).

Tricky part: borders (zero padding, mirroring, ...).
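A minimal numpy sketch of this recipe with zero padding (the function name is mine; in practice one would call scipy.signal.convolve2d):

```python
import numpy as np

def conv2d_full(f, g):
    """Direct 'full' 2D convolution following the three-step recipe
    above; O(N^2 K^2), for illustration only."""
    H, W = f.shape
    kh, kw = g.shape
    gr = g[::-1, ::-1]                                     # 1. fold about origin
    pad = np.pad(f, ((kh - 1, kh - 1), (kw - 1, kw - 1)))  # zero padding at borders
    out = np.zeros((H + kh - 1, W + kw - 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # 2.-3. displace the folded kernel and sum the products
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * gr)
    return out
```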

Convolution
Filtering with a filter h(x, y) relies on the sampling (sifting) property of the delta function:

$$f(x, y) = \int\!\!\int f(\alpha, \beta)\, \delta(x - \alpha, y - \beta)\, d\alpha\, d\beta$$

Convolution
Convolution is a neighborhood operation in which each output pixel is the weighted sum of neighboring input pixels. The matrix of weights is called the convolution kernel, also known as the filter. A convolution kernel is a correlation kernel that has been rotated by 180 degrees.

Recipe
1. Rotate the convolution kernel 180 degrees about its center element.
2. Slide the center element of the convolution kernel so that it lies on top of the (i, k) element of f.
3. Multiply each weight in the rotated convolution kernel by the pixel of f underneath.
4. Sum the individual products from step 3.

Zero padding is generally used at the borders, but other border conditions are possible.

Example
f = [17 24  1  8 15
     23  5  7 14 16
      4  6 13 20 22
     10 12 19 21  3
     11 18 25  2  9]

kernel:
h = [8 1 6
     3 5 7
     4 9 2]

h rotated by 180 degrees:
[2 9 4
 7 5 3
 6 1 8]

Correlation
The operation called correlation is closely related to convolution. In correlation, the value of an output pixel is also computed as a weighted sum of neighboring pixels. The difference is that the matrix of weights, in this case called the correlation kernel, is not rotated during the computation.

Recipe
1. Slide the center element of the correlation kernel so that it lies on top of the (2, 4) element of f.
2. Multiply each weight in the correlation kernel by the pixel of f underneath.
3. Sum the individual products from step 2.

Example
kernel:
h = [8 1 6
     3 5 7
     4 9 2]
(the same kernel, used without rotation for correlation)
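To tie the two recipes together, a scipy sketch (assuming the 5x5 f and the kernel h from the examples above) checking that convolution equals correlation with the 180-degree-rotated kernel, and computing the (2, 4) correlation output:

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

f = np.array([[17, 24,  1,  8, 15],
              [23,  5,  7, 14, 16],
              [ 4,  6, 13, 20, 22],
              [10, 12, 19, 21,  3],
              [11, 18, 25,  2,  9]])
h = np.array([[8, 1, 6],
              [3, 5, 7],
              [4, 9, 2]])

# Convolution rotates the kernel by 180 degrees, correlation does not,
# so convolving with h is the same as correlating with rot180(h).
assert np.array_equal(convolve2d(f, h), correlate2d(f, h[::-1, ::-1]))

# The (2, 4) output pixel of the correlation recipe (1-based indexing).
print(correlate2d(f, h, mode="same")[1, 3])   # -> 585
```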
