
DM, ADM, DPCM, ADPCM

Delta Modulation

In Delta modulation: An incoming message signal is oversampled (at a rate much higher than the Nyquist rate)- To purposely increase the correlation between the adjacent samples of the signal
Oversampling: Permits the use of a simple quantizing strategy for
constructing the encoded signal
Delta modulation: Provides a staircase approximation to the
oversampled version of the message signal
The staircase approximation of the oversampled version is illustrated
in Figure 1(a)

The Difference between the input and the approximation is quantized
into only two levels: Namely ±∆

Figure 1: Illustration of Delta Modulation

+∆: Corresponds to positive difference
−∆: Corresponds to negative difference
If the approximation falls below the signal at any sampling epoch: It
is increased by ∆
If the approximation lies above the signal: It is diminished by ∆
If the signal does not change too rapidly from sample to sample: We
find that the staircase approximation remains within ±∆ of the input
signal
Let m(t) denote the message signal, mq (t) denote its staircase
approximation, and m(n) denote the sample of the signal m(t) taken
at t = nTs

e(n) = m(n) − mq (n − 1) (1)


eq (n) = ∆sgn(e(n)) (2)
mq (n) = mq (n − 1) + eq (n) (3)

e(n) is an error signal: representing the difference between the
present sample m(n) of the input signal and the latest approximation
mq (n − 1)
eq (n) is the quantized version of e(n), and sgn(.) is the signum
function
The quantizer output mq (n): is coded to produce delta modulated
signal
Figure 1(a) illustrates the way in which the staircase approximation
mq (t) follows the variations in the input signal m(t): in accordance
with above equations
Figure 1(b): displays the corresponding binary sequence at the delta
modulator output
The rate of information transmission in delta modulation: is equal to
the sampling rate fs = 1/Ts
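The update equations (1)-(3) can be sketched in a few lines of code. The following is a minimal Python illustration; the function name and the oversampled test signal are assumptions for the sketch, not from the text:

```python
import numpy as np

def delta_modulate(m, delta):
    """Sketch of a DM encoder following Eqs. (1)-(3): compare, quantize
    to +/-delta, accumulate. Returns the transmitted bits (1 for +delta,
    0 for -delta) and the staircase approximation mq(n)."""
    mq_prev = 0.0                         # initial approximation
    bits, staircase = [], []
    for sample in m:
        e = sample - mq_prev              # e(n) = m(n) - mq(n-1)       (1)
        eq = delta if e >= 0 else -delta  # eq(n) = delta * sgn(e(n))   (2)
        mq_prev = mq_prev + eq            # mq(n) = mq(n-1) + eq(n)     (3)
        bits.append(1 if eq > 0 else 0)
        staircase.append(mq_prev)
    return bits, np.array(staircase)

# A slowly varying (oversampled) input stays within the staircase's
# reach when delta is large enough for the signal's slope
fs = 8000
t = np.arange(0, 0.01, 1 / fs)
m = np.sin(2 * np.pi * 100 * t)
bits, mq = delta_modulate(m, delta=0.1)
```

Because the per-sample change of this test signal is smaller than ∆, the staircase stays within roughly ±2∆ of the input, as the slides describe.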
The advantage of delta modulation: is its simplicity
Transmitter:

A delta-modulated signal may be generated by applying the sampled version
of the incoming signal to a modulator: Involving a comparator,
quantizer, and accumulator
The interconnection of comparator, quantizer, and accumulator in
generator is shown in Figure 2

Figure 2: DM Transmitter

The block labeled Z −1 inside the accumulator represents: a unit delay
(equal to one sampling period)
Comparator: Computes the difference between its two inputs
Quantizer: Consists of a hard limiter with an input-output relation
that is a scaled version of the signum function
The quantizer output is then applied to an accumulator producing the
result:
mq(n) = ∆ Σ_{i=1}^{n} sgn(e(i))  (4)
      = Σ_{i=1}^{n} eq(i)  (5)

At each sampling instant: The accumulator increments the approximation by a step ∆ in a positive or negative direction- depending on the algebraic sign of the error sample e(n)
If the input sample m(n) is greater than the most recent
approximation mq (n − 1): a positive increment +∆ is applied to the
approximation
On the other hand: If the input sample is smaller- a negative
increment −∆ is applied to the approximation
Receiver:
The receiver is shown in Figure 3

Figure 3: DM Receiver

In the receiver the staircase approximation mq (t) is reconstructed by
passing the sequence of positive and negative pulses: Produced at the
decoder output through an accumulator in a manner similar to that
used in the transmitter
The out of band quantization noise in the high frequency staircase
waveform mq (t) is rejected by passing it through a LPF- with a
bandwidth equal to the original message bandwidth
Delta modulation is subject to two types of quantization errors:
1 Slope overload distortion
2 Granular noise
Slope overload distortion:
We observe that Equation (3) is the digital equivalent of integration:
it represents the accumulation of positive and negative increments of
magnitude ∆

Denoting the quantization error by q(n)

q(n) = mq (n) − m(n) (6)


mq (n) = m(n) + q(n) (7)

On substituting (7) in (1), we get

e(n) = m(n) − m(n − 1) − q(n − 1) (8)

From the above equation we see that, except for the quantization error
q(n − 1): the quantizer input is a first backward difference of the input
signal- which may be viewed as a digital approximation to the
derivative of the input signal, or equivalently as the inverse of the
digital integration process

If we consider the maximum slope of the original input waveform
m(t): in order for the sequence of samples mq (n) to increase
as fast as the input sequence of samples m(n) in a region of
maximum slope of m(t)- the condition below must be satisfied

∆/Ts ≥ max |dm(t)/dt|  (9)
Otherwise: we find that the step size ∆ is too small for the staircase
approximation mq (t) to follow a steep segment of the input waveform
m(t)
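Condition (9) can be checked numerically. The helper below is a hypothetical illustration (not from the text), estimating the derivative by a first difference of the samples:

```python
import numpy as np

def satisfies_slope_condition(m, fs, delta):
    """Check Eq. (9): delta/Ts >= max|dm/dt|, where Ts = 1/fs and the
    derivative is approximated by a first difference of the samples."""
    max_slope = np.max(np.abs(np.diff(m))) * fs   # estimate of max|dm(t)/dt|
    return delta * fs >= max_slope

# For a sinusoid A*sin(2*pi*f0*t), max|dm/dt| = 2*pi*f0*A; at fs = 8 kHz
# and f0 = 100 Hz, a step of 0.1 avoids slope overload, a step of 0.01 does not
fs = 8000
t = np.arange(0, 0.02, 1 / fs)
m = np.sin(2 * np.pi * 100 * t)
ok_large = satisfies_slope_condition(m, fs, delta=0.1)
ok_small = satisfies_slope_condition(m, fs, delta=0.01)
```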
The situation in which mq (t) falls behind m(t) is illustrated in Figure 4: This
condition is called slope overload- The resulting quantization error is
called slope overload distortion (noise)
The maximum slope of the staircase approximation mq (t) is fixed by
the step size ∆: Since the delta modulator uses a fixed step size, it is
often referred to as a linear delta modulator

Figure 4: Illustration of slope overload distortion and Granular noise in
delta modulation

In contrast to slope overload distortion: granular noise occurs when the step size ∆ is too large relative to the local slope characteristics of the input waveform m(t)

Granular noise: Causes the staircase approximation mq (t) to hunt
around a relatively flat segment of the input waveform (This is
illustrated in Figure 4)
Granular noise is analogous to quantization noise in a PCM system
We therefore see a need to have:
1 A large step size to accommodate a wide dynamic range
2 A small step size for the accurate representation of relatively low level
signals
The choice of the optimum step size that minimizes the mean-square
value of the quantization error in a linear delta modulator: will result
in a compromise between slope overload and granular noise
To satisfy such a requirement: We need to make the delta modulator
"adaptive"- in the sense that the step size is made to vary in
accordance with the input signal

Differential Pulse Code Modulation (DPCM)

When a voice or video signal is sampled at a rate slightly higher than the Nyquist rate, as usually done in pulse-code modulation: The resulting sampled signal is found to exhibit a high degree of correlation between adjacent samples
The meaning of this high correlation, in an average sense: the signal
does not change rapidly from one sample to the next
As the signal does not change rapidly from one sample to the next:
the difference between adjacent samples has a variance that is smaller
than the variance of the signal itself
When these highly correlated samples are encoded, as in the standard
PCM system: the resulting encoded signal contains redundant
information
Redundant information: The symbols that are not absolutely essential
to the transmission of information are generated as a result of the
encoding process
By removing this redundancy before encoding: we obtain a more
efficient coded signal- which is the basic idea behind differential
pulse-code modulation
If we know the past behavior of a signal up to a certain point in time:
We may use prediction to make an estimate of a future value of the
signal
Suppose a baseband signal m(t) is sampled at the rate fs = 1/Ts : to
produce the sequence m(n) whose samples are Ts seconds apart
The fact that it is possible to predict future values of the signal m(t):
provides motivation for the differential quantization scheme shown in
Figure 5
In this scheme: the input signal to the quantizer is defined by

e(n) = m(n) − m̂(n) (10)

which is the difference between the unquantized input sample m(n) and a prediction of it, denoted by m̂(n)

Figure 5: DPCM Transmitter

The predicted value m̂(n) is produced by using a linear prediction filter: whose input is the quantized version of the input sample m(n)
The difference signal e(n) is the prediction error: since it is the
amount by which the prediction filter fails to predict the input exactly

By encoding the quantizer output as in Figure 5 we obtain a variant
of PCM known as differential pulse-code modulation (DPCM)
The quantizer output may be expressed as

eq (n) = e(n) + q(n) (11)

where q(n) is the quantization error


According to Figure 5: the quantizer output eq (n) is added to the
predicted value m̂(n) to produce the prediction-filter input

mq (n) = m̂(n) + eq (n) (12)


mq (n) = m̂(n) + e(n) + q(n) (13)
mq (n) = m(n) + q(n) (14)

From the above equations we see that: Irrespective of the properties of the prediction filter- the quantized sample mq (n) at the prediction filter input differs from the original input sample m(n) by the quantization error q(n)
Accordingly, if the prediction is good, the variance of the prediction
error e(n) will be smaller than the variance of m(n)
The quantizer with a given number of levels can be adjusted to
produce a quantization error with a smaller variance than would be
possible: if the input sample m(n) were quantized directly as in a
standard PCM system
The receiver for reconstructing the quantized version of the input is
shown in Figure 6
The receiver consists of a decoder to reconstruct the quantized error
signal

Figure 6: DPCM Receiver

The quantized version of the original input is reconstructed from the decoder output, using the same prediction filter used in the transmitter
In the absence of channel noise: we find that the encoded signal at
the receiver input is identical to the encoded signal at the transmitter
output

Accordingly, the corresponding receiver output is equal to mq (n):
which differs from the original input m(n) only by the quantization
error q(n) (incurred as a result of quantizing the prediction error e(n))
In a noise free environment: The prediction filters in the transmitter
and receiver operate on the same sequence of samples mq (n)
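The transmitter of Figure 5 and the receiver of Figure 6 can be sketched together in code. The first-order predictor m̂(n) = a·mq(n − 1) and the uniform quantizer step are illustrative assumptions, not choices specified in the text:

```python
import numpy as np

def dpcm_encode(m, a=0.9, step=0.05):
    """DPCM transmitter sketch (Figure 5) with a hypothetical first-order
    predictor m_hat(n) = a*mq(n-1) and a uniform quantizer of the given step."""
    mq_prev = 0.0
    codes = []
    for sample in m:
        m_hat = a * mq_prev              # prediction from the quantized past
        e = sample - m_hat               # e(n) = m(n) - m_hat(n)      (10)
        code = int(np.round(e / step))   # quantizer index (transmitted)
        eq = code * step                 # eq(n) = e(n) + q(n)         (11)
        mq_prev = m_hat + eq             # mq(n) = m_hat(n) + eq(n)    (12)
        codes.append(code)
    return codes

def dpcm_decode(codes, a=0.9, step=0.05):
    """DPCM receiver sketch (Figure 6): the same predictor, fed by the
    decoded eq(n); its output is mq(n) = m(n) + q(n) as in Eq. (14)."""
    mq_prev = 0.0
    out = []
    for code in codes:
        mq_prev = a * mq_prev + code * step
        out.append(mq_prev)
    return np.array(out)

fs = 8000
t = np.arange(0, 0.02, 1 / fs)
m = np.sin(2 * np.pi * 200 * t)
codes = dpcm_encode(m)
recovered = dpcm_decode(codes)
```

Consistent with Eq. (14), the reconstruction differs from the input only by the quantization error, i.e. by at most step/2 per sample.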
Differential pulse code modulation includes delta modulation as a
special case
The important differences between the DM and DPCM are
1 In DM a one bit (two level) quantizer is used
2 In DM a Single delay element is used in place of the prediction filter
DPCM like DM is subject to slope overload distortion when the input
signal changes too rapidly for the prediction filter to track
Also like PCM: DPCM suffers from quantization noise
Processing Gain:

The output signal to noise ratio of the DPCM system shown in Figure
6 is given by
(SNR)O = σM² / σQ²  (15)

where σM² is the variance of the original input sample m(n), assumed
to be of zero mean, and σQ² is the variance of the quantization error q(n)
The above equation can be rewritten as

(SNR)O = (σM² / σE²)(σE² / σQ²)  (16)
       = Gp (SNR)Q  (17)

where σE² is the variance of the prediction error. The factor (SNR)Q
is the signal-to-quantization-noise ratio, defined by

(SNR)Q = σE² / σQ²  (18)
The other factor Gp is the processing gain: produced by the
differential quantization scheme, defined by

Gp = σM² / σE²  (19)

The quantity Gp: when greater than unity- represents a gain in signal-to-noise ratio that is due to the differential quantization scheme
For a given baseband (message) signal: the variance σM² is fixed- Gp
is therefore maximized by minimizing the variance σE² of the prediction error
e(n)
Accordingly: the objective should be to design the prediction filter so as to
minimize σE²
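The processing gain of Eq. (19) can be illustrated numerically. The AR(1) test source and the correlation coefficient below are assumptions for the sketch, not from the text; for this source the best first-order predictor is m̂(n) = ρ·m(n − 1), giving Gp = 1/(1 − ρ²) in theory:

```python
import numpy as np

# Generate a correlated AR(1) source m(n) = rho*m(n-1) + w(n)
rng = np.random.default_rng(0)
rho = 0.95
n = 50_000
m = np.zeros(n)
for i in range(1, n):
    m[i] = rho * m[i - 1] + rng.standard_normal()

e = m[1:] - rho * m[:-1]        # prediction error e(n) of the optimal predictor
gp = np.var(m) / np.var(e)      # Gp = sigma_M^2 / sigma_E^2, Eq. (19)
gp_db = 10 * np.log10(gp)       # theory: 10*log10(1/(1 - 0.95^2)), about 10 dB
```

A gain of about 10 dB for this strongly correlated source is in line with the 4 to 11 dB advantage quoted below for voice.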

In the case of voice signals: It is found that the optimum signal-to-quantization-noise advantage of DPCM over standard PCM is 4 to 11 dB
The greatest improvement (in DPCM) occurs: when going from no
prediction to first order prediction
Some additional gain resulting from: Increasing the order of the
prediction filter upto 4 or 5 (after which little additional gain is
obtained)
The SNR (dB) for PCM is 1.8 + 6R, where R is the number of bits per
sample: 6 dB of SNR is thus equivalent to 1 bit per sample
Accordingly the advantage of DPCM may be expressed in terms of bit
rate:
1 For a constant SNR: assuming a sampling rate of 8 kHz- the use of
DPCM may provide a saving of about 8 to 16 kb/s (i.e., 1 to 2 bits per
sample) compared to the standard PCM
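The bit-rate arithmetic above can be checked directly; the helper names are hypothetical:

```python
def pcm_snr_db(R):
    """SNR of standard PCM in dB for R bits per sample: 1.8 + 6R."""
    return 1.8 + 6 * R

def dpcm_saving(gain_db, fs_khz=8):
    """Bits per sample and kb/s saved for a processing gain of gain_db,
    using the 6 dB-per-bit equivalence at a sampling rate of fs_khz."""
    bits = gain_db / 6.0
    return bits, bits * fs_khz

# A 6-12 dB prediction gain at 8 kHz sampling corresponds to a saving
# of 1-2 bits per sample, i.e. 8-16 kb/s relative to standard PCM
bits_saved, kbps_saved = dpcm_saving(9.0)   # 1.5 bits/sample, 12 kb/s
```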

Adaptive Differential Pulse Code Modulation (ADPCM)

The use of PCM for speech coding at the standard rate of 64 kb/s:
demands a high channel bandwidth for its transmission
In certain applications: such as secure transmission over radio
channels that are inherently of low capacity- channel bandwidth is at
a premium
In applications of this kind ( where the channel capacity is low): there
is a need for speech coding at low bit rates- while maintaining
acceptable fidelity or quality of reproduction
For coding speech at low bit rates: a waveform coder of prescribed
configuration is optimized by exploiting both statistical
characterization of speech waveforms and properties of hearing
In particular: The design philosophy has two aims in mind:
1 To remove the redundancies from the speech signal as far as possible
2 To assign the available bits to code the non redundant parts of the
speech signal in a perceptually efficient manner

To reduce the bit rate from 64 kb/s (used in standard PCM) to 32,
16, 8, and 4 kb/s: the schemes used for redundancy removal and bit
assignment become increasingly more sophisticated
As a rule of thumb: reducing the bit rate from 64 kb/s toward the
8 kb/s range increases the computational complexity (measured in terms of
multiply and add operations)
Adaptive differential pulse code modulation(ADPCM): permits the
coding of speech at 32 kb/s through the combined use of adaptive
quantization and adaptive prediction
In ADPCM: the number of eight bits per sample required in the
standard PCM is reduced to four
The term adaptive: means being responsive to changing level and
spectrum of the input speech signal

The variation of performance with speakers and speech material,
together with variations in signal level inherent in the speech
communication process: make the combined use of adaptive
quantization and adaptive prediction necessary to achieve best
performance over a wide range of speakers and speaking situations
Adaptive quantization: refers to a quantizer that operates with a time
varying step size ∆(n)
At any given sampling instant, identified by the index n: the adaptive
quantizer is assumed to have a uniform transfer characteristic
The step size ∆(n) is varied so as to match the variance σM² of the
input sample m(n):

∆(n) = ϕ σ̂M(n)  (20)

where ϕ is a constant, and σ̂M(n) is an estimate of the standard
deviation σM(n) (the square root of the variance)
For a non stationary input: σM (n) is time varying

The problem of adaptive quantization according to Equation (20): is
computing the estimate σˆM (n) continuously
The implementation of Equation (20) may proceed in two ways:
1 Adaptive quantization with forward estimation (AQF): in which
unquantized samples of the input signal are used to derive the forward
estimates of σM (n)
2 Adaptive quantization with backward estimation (AQB): in which
samples of the quantizer output are used to derive backward estimates
of σM (n)
AQF scheme:
1 requires the use of a buffer to store unquantized samples of the input
speech signal needed for the learning period
2 requires the explicit transmission of level information (typically, about 5
to 6 bits per step-size sample) to a remote decoder- thereby burdening
the system with additional side information that has to be transmitted
to the receiver
3 introduces a processing delay (on the order of 16 ms for speech) in the
encoding operations- which is unacceptable in some applications

The problems of buffering, level transmission, and delay intrinsic to
AQF: are all avoided in AQB
In AQB: the recent history of the quantizer output is used to extract
information for the computation of the step size ∆(n)
In practice: AQB is therefore usually preferred over AQF
The block diagram of AQB is shown in Figure 7
AQB represents a nonlinear feedback system: hence it is not obvious
that the system will be stable
If the quantizer input m(n) is bounded: Then the backward estimate
σˆM (n) and the corresponding step size ∆(n) are bounded- under
such a condition the system is indeed stable
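A minimal sketch of the backward estimator follows; the exponential smoothing constant α and the floor ∆min are illustrative assumptions, not values from the text:

```python
import numpy as np

def aqb_step_sizes(quantizer_out, phi=1.5, alpha=0.9, delta_min=1e-3):
    """AQB sketch: estimate the signal power from the quantizer OUTPUT
    (backward estimation) with an exponentially weighted running average,
    then set Delta(n) = phi * sigma_hat(n) as in Eq. (20), floored at
    delta_min so the step size never collapses to zero."""
    power_hat = 0.0
    deltas = []
    for y in quantizer_out:
        power_hat = alpha * power_hat + (1 - alpha) * y * y  # running power
        deltas.append(max(phi * power_hat ** 0.5, delta_min))
    return np.array(deltas)

# Bounded quantizer output -> bounded estimate -> bounded step size,
# consistent with the stability argument above
y = np.sin(np.linspace(0, 20, 500))
deltas = aqb_step_sizes(y)
```

Because only past quantizer outputs are used, the decoder can run the identical estimator and no side information needs to be transmitted.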

Figure 7: Adaptive Quantization with backward estimation (AQB)

The use of adaptive prediction in ADPCM is justified: because speech signals are inherently nonstationary
Nonstationary: a phenomenon in which the autocorrelation function and power spectral density of speech signals are time-varying functions of their respective arguments

This implies that the design of predictors for such inputs should
likewise be time varying, that is, adaptive
As with adaptive quantization, there are two schemes for performing
adaptive prediction:
1 Adaptive prediction with forward estimation (APF): in which
unquantized samples of the input signal are used to derive estimates of
the predictor coefficients.
2 Adaptive prediction with backward estimation (APB), in which samples
of the quantizer output and the prediction error are used to derive
estimates of the predictor coefficients.
APF suffers from the same intrinsic disadvantages: side information,
buffering, and delay as AQF
These disadvantages are eliminated by using: APB scheme
The block diagram for Adaptive Prediction with Backward Estimation
(APB) is shown in Figure 8

Figure 8: Adaptive Prediction with backward estimation (APB)

The box labeled “logic for adaptive prediction” represents the
algorithm for updating the predictor coefficients
In APB: the optimum predictor coefficients are estimated on the basis
of quantized and transmitted data- predictor coefficients can therefore
be updated as frequently as desired, say, from sample to sample
APB is the preferred method of prediction for ADPCM
The LMS algorithm for the predictor and an adaptive scheme for the
quantizer have been combined in a synchronous fashion for the design
of both the encoder and decoder
The performance of this combination is so impressive at 32 kb/s that
ADPCM has been accepted internationally as a standard coding technique
for voice signals: along with standard PCM at 64 kb/s

Adaptive Delta Modulation (ADM)

A simple form of adaptive quantization with backward estimation (AQB): is found in the modification of linear delta modulation (LDM) to form adaptive delta modulation (ADM)
Principle of ADM:
1 If successive errors are of opposite polarity: Then the delta modulator
is operating in its granular mode- In this case it may be advantageous
to reduce the step size
2 If successive errors are of same polarity: Then the delta modulator is
operating in its slope-overload mode- In this case the step size should
be increased
By varying the step size in accordance with this principle: The delta
modulator is enabled to cope with changes in the input signal
Figure 9 shows the block diagram of an ADM based on increasing or
decreasing the step size by a factor of 50 percent at each iteration of
the adaptive process

Figure 9: Adaptive delta modulation system: (a) Transmitter. (b) Receiver.

The algorithm for the adaptation of the step size is defined by

∆(n) = |∆(n − 1)| mq(n) (mq(n) + 0.5 mq(n − 1)),  if ∆(n − 1) ≥ ∆min
∆(n) = ∆min,                                      if ∆(n − 1) < ∆min

where ∆(n) is the step size at iteration (time step) n of the algorithm,
and mq(n) is the 1-bit quantizer output, which equals ±1
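Since mq(n)² = 1, the product mq(n)(mq(n) + 0.5 mq(n − 1)) equals 1.5 when successive outputs agree and 0.5 when they alternate, which is the 50 percent rule in action. A minimal sketch (the initial step, the floor, and the test signal are illustrative assumptions):

```python
import numpy as np

def adaptive_delta_modulate(m, delta_init=0.1, delta_min=0.01):
    """ADM encoder sketch: 1-bit quantizer mq(n) in {+1, -1} plus the
    50 percent step-size adaptation rule above. Same polarity twice in a
    row multiplies the step by 1.5; alternating polarity multiplies it
    by 0.5; the step is floored at delta_min."""
    approx_prev, bit_prev, delta = 0.0, 1.0, delta_init
    staircase = []
    for sample in m:
        bit = 1.0 if sample >= approx_prev else -1.0   # mq(n)
        if delta >= delta_min:
            # |delta|*mq(n)*(mq(n) + 0.5*mq(n-1)) = 1.5*delta or 0.5*delta
            delta = abs(delta) * bit * (bit + 0.5 * bit_prev)
        else:
            delta = delta_min
        approx_prev += bit * delta                     # staircase update
        bit_prev = bit
        staircase.append(approx_prev)
    return np.array(staircase)

fs = 8000
t = np.arange(0, 0.02, 1 / fs)
m = np.sin(2 * np.pi * 100 * t)
mq = adaptive_delta_modulate(m)
```

On steep segments the step grows to fight slope overload; on flat segments it shrinks toward ∆min, reducing granular noise.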

