Engineering Chemistry
Unit 1
Sampling Theorem
Sampling is the process of converting a signal (for example, a function of continuous
time or space) into a numeric sequence (a function of discrete time or space). Shannon's
version of the theorem states: if a function x(t) contains no frequencies higher than B
hertz, it is completely determined by giving its ordinates at a series of points spaced
1/(2B) seconds apart.
Fig. 1: The normalized sinc function, sin(πx)/(πx), showing the central peak at x = 0
and zero-crossings at the other integer values of x.
The symbol T is customarily used to represent the interval between samples, and the
samples of the function x are denoted by x(nT), for all integer values of n. The
mathematically ideal way to interpolate the sequence involves the use of sinc functions,
like those shown in Fig 2. Each sample in the sequence is replaced by a sinc function,
centered on the time axis at the original location of the sample (nT), with the amplitude
of the sinc function scaled to the sample value, x(nT). Subsequently, the sinc functions
are summed into a continuous function. A mathematically equivalent method is to
convolve one sinc function with a series of Dirac delta pulses, weighted by the sample
values. Neither method is numerically practical. Instead, some type of approximation of
the sinc functions, finite in length, should be utilized. The imperfections attributable to
the approximation are known as interpolation error.
Practical digital-to-analog converters produce neither scaled and delayed sinc functions,
nor ideal Dirac pulses. Instead they produce a piecewise-constant sequence of scaled
and delayed rectangular pulses, usually followed by a "shaping filter" to clean
up spurious high-frequency content.
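The ideal (Whittaker–Shannon) interpolation described above can be sketched in a few lines of Python. This is a toy illustration, not part of the original text; the function names, the sample rate, and the example signal are my own choices:

```python
import math

def sinc(x):
    # normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, T, t):
    # Each sample x(nT) is replaced by a sinc centered at nT and scaled
    # by the sample value; the sincs are then summed.
    return sum(x_n * sinc((t - n * T) / T) for n, x_n in enumerate(samples))

# Example: samples of a 1 Hz sine taken with T = 0.125 s (fs = 8 Hz)
T = 0.125
samples = [math.sin(2 * math.pi * 1.0 * n * T) for n in range(64)]
# Evaluating at a sample instant recovers (up to rounding) that sample
print(reconstruct(samples, T, 10 * T))  # ≈ samples[10]
```

Note the sum here is finite, so this is exactly the kind of finite-length approximation the text calls interpolation error; the ideal reconstruction would sum over all n.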
Aliasing
Fig. 2: The samples of several different sine waves can be identical when at least one of
them is at a frequency above half the sample rate.
Let X(f) be the Fourier transform of the bandlimited function x(t):

    X(f) = ∫ x(t) e^(−i2πft) dt,

and let

    Xs(f) = Σk X(f − k·fs) = Σn T·x(nT) e^(−i2πnTf),

which is a periodic function, written together with its equivalent representation as a
Fourier series, whose coefficients are T·x(nT). This function is also known as the
discrete-time Fourier transform (DTFT) of the sample sequence.
If the Nyquist criterion is not satisfied, adjacent copies overlap, and it is not possible in
general to discern an unambiguous X(f). Any frequency component above fs/2 is
indistinguishable from a lower-frequency component, called an alias, associated with
one of the copies. In such cases, the customary interpolation techniques produce the
alias, rather than the original component. When the sample-rate is pre-determined by
other considerations (such as an industry standard), x(t) is usually filtered to reduce its
high frequencies to acceptable levels before it is sampled. The type of filter required is
a lowpass filter, and in this application it is called an anti-aliasing filter.
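The ambiguity illustrated in Fig. 2 is easy to demonstrate numerically. In this sketch (my own example, not from the text), a 3 Hz sine and a 13 Hz sine sampled at 10 Hz produce numerically indistinguishable sample sequences, because 13 Hz lies above fs/2 and aliases onto 13 − fs = 3 Hz:

```python
import math

fs = 10.0            # sample rate (Hz)
f1, f2 = 3.0, 13.0   # f2 = f1 + fs, so f2 is above fs/2 and aliases onto f1

s1 = [math.sin(2 * math.pi * f1 * n / fs) for n in range(20)]
s2 = [math.sin(2 * math.pi * f2 * n / fs) for n in range(20)]

# The two sampled sequences agree to within floating-point rounding
print(max(abs(a - b) for a, b in zip(s1, s2)))
```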
Unit 1
Sampling of Band Pass Signal
Although satisfying the majority of sampling requirements, the sampling of low-pass
signals, as in Figure 2-6, is not the only sampling scheme used in practice. We can use a
technique known as bandpass sampling to sample a continuous bandpass signal that is
centered about some frequency other than zero Hz. When a continuous input signal's
bandwidth and center frequency permit us to do so, bandpass sampling not only
reduces the speed requirement of A/D converters below that necessary with traditional
low-pass sampling; it also reduces the amount of digital memory necessary to capture a
given time interval of a continuous signal.
By way of example, consider sampling the band-limited signal shown in Figure 2-
7(a) centered at fc= 20 MHz, with a bandwidth B = 5 MHz. We use the term bandpass
sampling for the process of sampling continuous signals whose center frequencies have
been translated up from zero Hz. What we're calling bandpass sampling goes by various
other names in the literature, such as IF sampling, harmonic sampling [2], sub-Nyquist
sampling, and undersampling. In bandpass sampling, we're more concerned with a
signal's bandwidth than its highest frequency component. Note that the negative
frequency portion of the signal, centered at –fc, is the mirror image of the positive
frequency portion—as it must be for real signals. Our bandpass signal's highest
frequency component is 22.5 MHz. Conforming to the Nyquist criterion (sampling at
twice the highest frequency content of the signal) implies that the sampling
frequency must be a minimum of 45 MHz.
Figure 2-7 Bandpass signal sampling: (a) original continuous signal spectrum; (b) sampled
signal spectrum replications when sample rate is 17.5 MHz.
Figure 2-8 Bandpass sampling frequency limits: (a) sample rate fs' = (2fc – B)/6; (b)
sample rate is less than fs'; (c) minimum sample rate fs'' < fs'.
If we reduce the sample rate below the fs' value shown in Figure 2-8(a), the spacing
between replications will decrease in the direction of the arrows in Figure 2-8(b). Again,
the original spectra do not shift when the sample rate is changed. At some new
sample rate fs'', where fs'' < fs', the replication P' will just butt up against the positive
original spectrum centered at fc as shown. In this condition, we know that

    (m + 1)·fs'' = 2fc + B,  i.e.  fs'' = (2fc + B)/(m + 1).    (2-8)

Should fs'' be decreased in value, P' will shift further down in frequency and start to
overlap with the positive original spectrum at fc, and aliasing occurs. Therefore, from Eq.
(2-8), there is a frequency that the sample rate must always exceed, giving the acceptable
sample-rate ranges

    (2fc − B)/m ≥ fs ≥ (2fc + B)/(m + 1),    (2-9)

where m is an arbitrary, positive integer ensuring that fs ≥ 2B. (For this type of periodic
sampling of real signals, known as real or first-order sampling, the Nyquist criterion fs ≥
2B must still be satisfied.)
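The permissible-rate inequality (2fc − B)/m ≥ fs ≥ (2fc + B)/(m + 1) can be enumerated for the worked example (fc = 20 MHz, B = 5 MHz). The helper below is my own sketch; it lists each valid range of sample rates, and the 17.5 MHz rate used in Figure 2-7(b) appears as the upper edge of one of them:

```python
def bandpass_fs_ranges(fc, B):
    # Valid uniform sampling rates for a real bandpass signal centered
    # at fc with bandwidth B:
    #   (2*fc - B)/m  >=  fs  >=  (2*fc + B)/(m + 1)
    # for positive integers m, subject to fs >= 2*B.
    ranges = []
    m = 1
    while True:
        hi = (2 * fc - B) / m
        lo = (2 * fc + B) / (m + 1)
        if hi < lo or hi < 2 * B:
            break
        ranges.append((max(lo, 2 * B), hi))
        m += 1
    return ranges

# Example from the text: fc = 20 MHz, B = 5 MHz
for lo, hi in bandpass_fs_ranges(20.0, 5.0):
    print(f"{lo:.3f} .. {hi:.3f} MHz")
```

For this signal the minimum usable rate comes out as 11.25 MHz, well below the 45 MHz that low-pass (Nyquist-rate) sampling would require.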
Unit 1
Pulse Amplitude Modulation
Pulse Amplitude Modulation (PAM)
Figure 2-2 and Figure 2-3 show the time domain appearance of a PAM signal for a triangle
wave message signal. In the figures you can see that the PAM signal is made up of small
segments (samples) of the message signal. As shown in the figures, two types of sampling
are possible. Figure 2-2 represents natural sampling while Figure 2-3 is the result
obtained with flat-top sampling.
For a PAM signal produced with natural sampling, the sampled signal follows the
waveform of the input signal during the time that each sample is taken. Flat-top
sampling, however, produces pulses whose amplitude remains fixed during the sampling
time. The amplitude value of the pulse depends on the amplitude of the input signal at
the time of sampling.
The switch is closed for the duration of each pulse allowing the message signal at that
sampling time to become part of the output. The switch is open for the remainder
of each sampling period making the output zero. This type of sampling is called natural
sampling.
Figure 2-6 shows the relationship between the message signal, the sampling signal,
and the resulting PAM signal using natural sampling.
Figure 2-6. Natural sampling.
Sampling rate
The repetition rate of the sampling signal is called the sampling rate, or sampling
frequency, and is abbreviated fs. Observation in the time domain shows that,
when the sampling rate fs is much greater than the frequency of the message
signal fm, the PAM signal clearly resembles the message signal. If the sampling rate is
reduced, or the message signal frequency is increased, the resemblance is less visible. It
is neither fs nor fm alone that determines the degree of resemblance between
the PAM and message signals; it is the ratio fs/fm. The lower this ratio, the less
the resemblance.
If the pulses are narrow, PAM signals require little power for transmission and lend
themselves easily to time-division multiplexing. Flat-topped pulses are easily regenerated
by repeater stations and can be used for transmission over long distances.
However, unlike other types of pulse modulation, PAM signals are affected by noise
as much as analog signals are. Using PAM, therefore, offers little resistance or protection
against noise in the transmission channel.
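The difference between natural and flat-top sampling can be sketched numerically. This is my own illustration (the message, gate duty, and time grid are arbitrary choices): during each closed-gate interval, the natural-sampling output follows the message, while the flat-top output holds the value taken at the start of the gate:

```python
import math

fm, fs = 1.0, 8.0        # message and sampling frequencies (Hz)
duty = 0.5               # fraction of each period the sampling gate is closed
dt = 1.0 / (fs * 100)    # fine time grid: 100 points per sampling period

def message(t):
    return math.sin(2 * math.pi * fm * t)

t_axis = [n * dt for n in range(800)]              # one message period
gate = [((t * fs) % 1.0) < duty for t in t_axis]   # True while gate closed

# Natural sampling: output follows the message while the gate is closed
natural = [message(t) if g else 0.0 for t, g in zip(t_axis, gate)]

# Flat-top sampling: output holds the value taken at the start of the gate
flat_top = [message(math.floor(t * fs) / fs) if g else 0.0
            for t, g in zip(t_axis, gate)]
```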
Unit 1
Pulse Width Modulation, Pulse Position Modulation
Pulse width modulation (PWM) is a powerful technique for controlling analog circuits
with a microprocessor's digital outputs. PWM is employed in a wide variety of
applications, ranging from measurement and communications to power control and
conversion.
Analog circuits
An analog signal has a continuously varying value, with infinite resolution in both time
and magnitude. A nine-volt battery is an example of an analog device, in that its output
voltage is not precisely 9V, changes over time, and can take any real-numbered value.
Similarly, the amount of current drawn from a battery is not limited to a finite set of
possible values. Analog signals are distinguishable from digital signals because the latter
always take values only from a finite set of predetermined possibilities, such as the set
{0V, 5V}.
Analog voltages and currents can be used to control things directly, like the volume of a
car radio. In a simple analog radio, a knob is connected to a variable resistor. As you turn
the knob, the resistance goes up or down. As that happens, the current flowing through
the resistor increases or decreases. This changes the amount of current driving the
speakers, thus increasing or decreasing the volume. An analog circuit is one, like the
radio, whose output is linearly proportional to its input.
As intuitive and simple as analog control may seem, it is not always economically
attractive or otherwise practical. For one thing, analog circuits tend to drift over time and
can, therefore, be very difficult to tune. Precision analog circuits, which solve that
problem, can be very large, heavy (just think of older home stereo equipment), and
expensive. Analog circuits can also get very hot; the power dissipated is proportional to
the voltage across the active elements multiplied by the current through them. Analog
circuitry can also be sensitive to noise. Because of its infinite resolution, any perturbation
or noise on an analog signal necessarily changes the current value.
Digital control
By controlling analog circuits digitally, system costs and power consumption can be
drastically reduced. What's more, many microcontrollers and DSPs already include on-
chip PWM controllers, making implementation easy.
In a nutshell, PWM is a way of digitally encoding analog signal levels. Through the use of
high-resolution counters, the duty cycle of a square wave is modulated to encode a
specific analog signal level. The PWM signal is still digital because, at any given instant of
time, the full DC supply is either fully on or fully off. The voltage or current source is
supplied to the analog load by means of a repeating series of on and off pulses. The on-
time is the time during which the DC supply is applied to the load, and the off-time is the
period during which that supply is switched off. Given a sufficient bandwidth, any analog
value can be encoded with PWM.
Figure 2.7 shows three different PWM signals. Figure 2.7a shows a PWM output at a 10%
duty cycle. That is, the signal is on for 10% of the period and off the other 90%. Figures
2.7b and 2.7c show PWM outputs at 50% and 90% duty cycles, respectively. These three PWM
outputs encode three different analog signal values, at 10%, 50%, and 90% of the full
strength. If, for example, the supply is 9V and the duty cycle is 10%, a 0.9V analog signal
results.
Figure 2.8 shows a simple circuit that could be driven using PWM. In the figure, a 9V
battery powers an incandescent lightbulb. If we closed the switch connecting the battery
and lamp for 50ms, the bulb would receive 9V during that interval. If we then opened the
switch for the next 50ms, the bulb would receive 0V. If we repeat this cycle 10 times a
second, the bulb will be lit as though it were connected to a 4.5V battery (50% of 9V). We
say that the duty cycle is 50% and the modulating frequency is 10Hz.
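The battery-and-bulb arithmetic above is just the duty-cycle average. A minimal sketch (my own helper name):

```python
def pwm_average(v_supply, duty_cycle):
    # The load sees v_supply for duty_cycle of each period and 0 V for
    # the rest, so the average (DC) value is simply their product.
    return v_supply * duty_cycle

print(pwm_average(9.0, 0.5))   # the 9 V / 50% example: 4.5 V
print(pwm_average(9.0, 0.1))   # 10% duty cycle: 0.9 V
```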
Most loads, inductive and capacitive alike, require a much higher modulating frequency
than 10Hz. Imagine that our lamp was switched on for five seconds, then off for five
seconds, then on again. The duty cycle would still be 50%, but the bulb would appear
brightly lit for the first five seconds and off for the next. In order for the bulb to see a
voltage of 4.5 volts, the cycle period must be short relative to the load's response time to
a change in the switch state. To achieve the desired effect of a dimmer (but always lit)
lamp, it is necessary to increase the modulating frequency. The same is true in other
applications of PWM. Common modulating frequencies range from 1kHz to 200kHz.
Pulse Position Modulation
In Pulse Position Modulation the amplitude of the pulse is kept constant, as in FM and
PWM, to avoid noise interference. Unlike in PWM, the pulse width is kept
constant to achieve constant transmitter power. The modulation is done by varying the
position of the pulse from the mean position according to the variations in the amplitude
of the modulating signal. This article discusses the technique of generating a PPM wave
corresponding to a modulating sine wave.
DESCRIPTION:
A Pulse Position Modulation (PPM) signal can be generated quite easily from a PWM
waveform that has been modulated according to the input signal waveform. The
technique is to generate a very small pulse of constant width at the end of the duty time
of each PWM pulse. PPM modulation of an input signal can be achieved
using the following circuit blocks:
The circuit diagram of the variable frequency sine wave oscillator is shown
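The position-encoding idea can be sketched as follows. This is purely illustrative (the function name and the modulation-depth factor k are my own assumptions): in each sampling period one pulse is emitted, displaced from the mid-period position in proportion to the message amplitude at the sampling instant:

```python
import math

def ppm_positions(message, t_samples, period, k=0.4):
    # One pulse per sampling period; the pulse is displaced from the
    # mid-period position by k * m(t) * period / 2, where m(t) is the
    # message amplitude at the sampling instant (|m(t)| <= 1 assumed).
    positions = []
    for i, t in enumerate(t_samples):
        offset = k * message(t) * period / 2.0
        positions.append(i * period + period / 2.0 + offset)
    return positions

# Pulse instants for one cycle of a unit sine message, sampled 4x per cycle
pos = ppm_positions(lambda t: math.sin(2 * math.pi * t),
                    [0.0, 0.25, 0.5, 0.75], period=1.0)
print(pos)
```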
Unit 1
Digital Transmission of Analog Signal
After the sampling we have a sequence of numbers which can theoretically still take on
any value on a continuous range of values. Because this range is continuous, there are
infinitely many possible values for each number, in fact even uncountably infinitely many.
In order to be able to represent each number from such a continuous range, we would
need an infinite number of digits - something we don’t have. Instead, we must represent
our numbers with a finite number of digits, that is: after discretizing the time-variable, we
now have to discretize the amplitude-variable as well. This discretization of the amplitude
values is called quantization. Assume our sequence takes on values in the range between
−1 and +1. Now assume that we must represent each number from this range with just two
decimal digits: one before and one after the point. Our possible amplitude values are
therefore: −1.0, −0.9, . . . , −0.1, 0.0, 0.1, . . . , 0.9, 1.0. These are exactly 21 distinct levels
for the amplitude, and we will denote this number of quantization levels by Nq. Each
level is a step of 0.1 higher than its predecessor, and we will denote this quantization
step size by q. Now we assign to each number from our continuous range the
quantization level closest to our actual amplitude: the range −0.05 ... +0.05 maps
to quantization level 0.0, the range 0.05 ... 0.15 maps to 0.1, and so on. That mapping can
be viewed as a piecewise constant function acting on our continuous amplitude variable x.
This is depicted in the figure shown below. Note that this mapping also includes a clipping
operation at ±1: values larger than 0.95 are mapped to quantization level 1.0, no matter
how large, and analogously for negative values.
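The round-to-nearest-level mapping with clipping described above is a one-liner in practice. A minimal sketch, using the text's q = 0.1 example (the function name is my own):

```python
def quantize(x, q=0.1, x_max=1.0):
    # Map x to the nearest quantization level, clipping at +/- x_max
    x = max(-x_max, min(x_max, x))
    return round(x / q) * q

print(quantize(0.04))   # 0.0  (the range -0.05 .. 0.05 maps to level 0.0)
print(quantize(0.07))   # 0.1  (the range 0.05 .. 0.15 maps to level 0.1)
print(quantize(1.7))    # 1.0  (clipped at the top level)
```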
When forcing an arbitrary signal value x to its closest quantization level xq, this xq value
can be seen as x plus some error. We will denote that error as eq (for quantization error),
and so we have:

    xq = x + eq  ⇔  eq = xq − x    (3)
The quantization error is restricted to the range −q/2 ... +q/2; we will never make an error
larger than half of the quantization step size. When the signal to be sampled has no
specific relationship to the sampling process, we can assume that this quantization error
(now treated as a discrete-time signal eq[n]) will manifest itself as additive white noise
with equal probability for all error values in the range −q/2 ... +q/2. Mathematically, we
can view the error signal as a random signal with a uniform probability density function
between −q/2 and +q/2, that is:

    p(eq) = 1/q  for −q/2 ≤ eq ≤ +q/2,  and 0 otherwise.
For this reason, the quantization error is often also called quantization noise. A more
rigorous mathematical justification for the treatment of the quantization error as
uniformly distributed white noise is provided by Widrow’s Quantization Theorem, but we
won’t pursue that any further here. We define the signal-to-noise ratio of a system as the
ratio between the RMS-value of the wanted signal and the RMS value of the noise
expressed in decibels:
where the RMS value is the square root of the (average) power of the signal. Denoting the
power of the signal as P x and the power of the error as P e and using an elementary rule
for logarithms, we can rewrite this as:
The power of the quantization error is given by the variance of the associated continuous
random variable and comes out as:
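The q²/12 noise power and the decibel SNR formula can be checked numerically. This sketch (my own) verifies the variance of a uniform error in [−q/2, +q/2] by midpoint-rule integration, then evaluates the SNR of a full-scale sine quantized with b bits, which lands near the well-known 6.02·b + 1.76 dB figure:

```python
import math

q = 0.1
# Theoretical noise power for a uniform error in [-q/2, +q/2]
p_e_theory = q * q / 12.0

# Midpoint-rule check of the same second moment
n = 100_000
p_e_numeric = sum((((i + 0.5) / n - 0.5) * q) ** 2 for i in range(n)) / n
print(p_e_theory, p_e_numeric)

def snr_db(p_x, p_e):
    # SNR = 10 log10(Px / Pe)
    return 10.0 * math.log10(p_x / p_e)

# Full-scale sine (amplitude 1, power 1/2) quantized with b bits over -1..+1
b = 8
q_b = 2.0 / (2 ** b)
print(snr_db(0.5, q_b * q_b / 12.0))   # close to 6.02*b + 1.76 dB
```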
Unit 2/ Lecture 6
Pulse Code Modulation
Pulse Code Modulation: The pulse code modulation technique samples the input signal
x(t) at a sampling frequency. Each sampled variable-amplitude pulse is then digitized by
the analog-to-digital converter. Figure (1) shows the PCM generator.
In the PCM generator, the signal is first passed through a sampler, which samples at a
rate fs where:

    fs ≥ 2fm

The output of the sampler, x(nTs), which is discrete in time, is fed to a q-level quantizer.
The quantizer compares the input x(nTs) with its fixed levels. It assigns to x(nTs) the
digital level that results in minimum distortion or error. The error is called quantization
error; thus the output of the quantizer is a digital level called q(nTs).
The quantized signal level q(nTs) is binary encoded. The encoder converts the input
signal into a v-digit binary word.
Figure (3) shows the block diagram of the PCM receiver. The receiver starts by
reshaping the received pulses and removing noise, and then converts the binary
bits back to analog values. The received samples are then filtered by a low-pass filter
whose cutoff frequency is

    fc = fm
It is impossible to reconstruct the original signal x(t) exactly because of the permanent
quantization error introduced during quantization at the transmitter. The
quantization error can be reduced by increasing the number of quantization levels. This
corresponds to an increase in bits per sample (more information). But increasing the
number of bits (v) increases the signaling rate and requires a larger transmission
bandwidth. The number of quantization levels must therefore be chosen so that the
resulting quantization noise (quantization error) remains acceptable. Figure (4) shows
the reconstructed signal.
Let an input signal x(nTs) have an amplitude in the range −xmax to +xmax. The total
amplitude range is:

    Total amplitude = xmax − (−xmax) = 2·xmax

If the amplitude range is divided into q quantizer levels, then the step size is

    ∆ = 2·xmax / q.
If ∆ is small it can be assumed that the quantization error is uniformly distributed.
The quantization noise is uniformly distributed in the interval [−∆/2, ∆/2]. Figure (5)
shows the uniform distribution of quantization noise:
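The step-size arithmetic above is easy to sketch. In this illustration (the function name is my own), a v-bit encoder gives q = 2^v levels over the range −xmax .. +xmax, so ∆ = 2·xmax/q, and the noise power follows as ∆²/12:

```python
def pcm_params(x_max, v):
    # A v-bit encoder provides q = 2**v quantization levels over the
    # range -x_max .. +x_max, so the step size is delta = 2*x_max / q.
    q_levels = 2 ** v
    delta = 2.0 * x_max / q_levels
    noise_power = delta ** 2 / 12.0
    return q_levels, delta, noise_power

levels, delta, p_n = pcm_params(x_max=1.0, v=4)
print(levels, delta)   # 16 levels, step size 0.125
```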
A-Law Compander
A-law is the CCITT-recommended companding standard used across Europe. Limiting the
linear sample values to 12 magnitude bits, the A-law compression is defined by Equation
1, where A is the compression parameter (A = 87.6 in Europe) and x is the normalized
integer to be compressed.
μ-Law Compander
The United States and Japan use μ-law companding. Limiting the linear sample values to
13 magnitude bits, the μ-law compression is defined by Equation 2, where μ is the
compression parameter (μ = 255 in the U.S. and Japan) and x is the normalized integer
to be compressed.
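Equation 2 is not reproduced in the text; the standard continuous μ-law characteristic is F(x) = sgn(x)·ln(1 + μ|x|)/ln(1 + μ) for −1 ≤ x ≤ 1. A sketch of that characteristic and its inverse (function names are my own; real codecs use a piecewise-linear 8-bit approximation of this curve):

```python
import math

MU = 255  # compression parameter used in the U.S. and Japan

def mu_law_compress(x, mu=MU):
    # Continuous mu-law characteristic for normalized input -1 <= x <= 1:
    # boosts small amplitudes, compresses large ones.
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mu_law_expand(y, mu=MU):
    # Inverse characteristic (the expander).
    return math.copysign(math.expm1(abs(y) * math.log1p(mu)) / mu, y)

x = 0.01
print(mu_law_compress(x))   # a small input is mapped well above 0.2
```

Round-tripping compress then expand recovers the original value, which is the point of companding: the quantizer in between sees a more uniform amplitude distribution.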
Scrambling
Q.1 A 6-bit single channel PCM system gives an output of 60 kbits per second. Determine
the highest possible modulating frequency for the system. (June-2011) [10]
Q.2 What is quantization noise? Determine its expression. (June-2010) [6]
Q.3 With the help of a block diagram explain the PCM communication system. (Dec-2010) [10]
Q.4 Explain the need of quantization. How is it done? What should be the limitation for
selecting the step size? (June-2011)
Q.5 What is companding? How is the dynamic range improved using companding? (June-2010)
Q.6 What is companding? Calculate the expression of signal to quantization noise
ratio. (Dec-2010)
Unit 2/ Lecture 8
Differential PCM
Differential pulse-code modulation (DPCM) is a signal encoder that uses the baseline
of pulse-code modulation (PCM) but adds some functionalities based on the prediction
of the samples of the signal. The input can be an analog signal or a digital signal.
Option 1: take the values of two consecutive samples; if they are analog
samples, quantize them; calculate the difference between the first one and the next;
the output is the difference, and it can be further entropy coded.
Option 2: instead of taking a difference relative to a previous input sample, take the
difference relative to the output of a local model of the decoder process; in this option,
the difference can be quantized, which allows a good way to incorporate a controlled
loss in the encoding.
Delta Modulation
- Each segment of the approximated signal is compared to the original analog wave to
determine the increase or decrease in relative amplitude.
- The decision process for establishing the state of successive bits is determined by
this comparison.
- Only the change of information is sent; that is, only an increase or decrease of the
signal amplitude from the previous sample is sent, whereas a no-change condition causes
the modulated signal to remain at the same 0 or 1 state as the previous sample.
Depending on the sign of e(nTs), the one-bit quantizer produces an output step of +δ or –δ. If
the step size is +δ, then binary '1' is transmitted. If the step size is -δ, then binary '0' is
transmitted.
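The one-bit encoder just described can be sketched directly. This is my own illustration (the function name, step size, and test message are arbitrary): at each sample, transmit 1 and step up by δ if the input is above the running approximation, otherwise transmit 0 and step down:

```python
import math

def delta_modulate(samples, delta=0.1):
    # One-bit quantizer: bit 1 -> approximation steps up by +delta,
    # bit 0 -> approximation steps down by -delta.
    approx = 0.0
    bits, track = [], []
    for x in samples:
        if x >= approx:
            bits.append(1)
            approx += delta
        else:
            bits.append(0)
            approx -= delta
        bits_last = bits[-1]  # transmitted bit for this sample
        track.append(approx)
    return bits, track

msg = [math.sin(2 * math.pi * t / 64) for t in range(64)]
bits, track = delta_modulate(msg, delta=0.12)
# With the step size above the message's per-sample slope, the staircase
# stays close to the slowly varying message (no slope overload).
print(max(abs(a - m) for a, m in zip(track, msg)))
```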
Q.1 With the help of a block diagram explain the Delta Modulation system. (Dec-2010) [10],
(June-2010) [10]
Q.2 Why is DM preferred over PCM? (June-2011) [5]
Q.3 Explain delta modulation in detail with a suitable diagram. Explain ADM and compare
its performance with DM. (June-2011) [5]
Q.4 Explain DPCM. How is it different from PCM? (June-2011) [10]
Unit 2/ Lecture 10
Adaptive Delta Modulation, Vocoders
To reduce the possibility of slope overload the step size can be increased (for the same
sampling rate). This is illustrated in Figure 2. The sawtooth is better able to
match the message in the regions of steep slope.
Referring to Figure 1, the sawtooth follows the message being sampled quite well in the
regions of small slope. To reduce the slope overload the step size is increased as shown
in Figure 2, and this time, the match over the regions of small slope has been degraded.
The degradation shows up, at the demodulator, as increased quantizing noise, or
‘granularity’.
There is a conflict between the requirements for minimization of slope overload and the
granular noise. The one requires an increased step size, the other a reduced step
size.
There is a way to overcome this problem: adjust the step size according to the
slope of the signal being sampled. This variation of basic delta modulation is
called adaptive delta modulation.
A large step size is required when sampling those parts of the input waveform of steep
slope. But a large step size worsens the granularity of the sampled signal when the
waveform being sampled is changing slowly. A small step size is preferred in regions
where the message has a small slope. This suggests the need for a controllable
step size – the control being sensitive to the slope of the sampled signal. This can be
implemented by an arrangement such as that illustrated in Figure 4.
The gain of the amplifier is adjusted in response to a control voltage from the sampler,
which signals the onset of slope overload. The step size is proportional to the amplifier
gain.
Slope overload is indicated by a succession of output pulses of the same sign. The
sampler monitors the delta modulated signal, and signals when there is no change of
polarity over 3 or more successive samples. The actual Adaptive Control signal is +2
volt under ‘normal’ conditions, and rises to +4 volt when slope overload is detected.
The gain of the amplifier, and hence the step size, is made proportional to this
control voltage. Provided the slope overload was only moderate the approximation
will ‘catch up’ with the wave being sampled.
The gain will then return to normal until the sampler again falls behind.
The Voltage Controlled Amplifier (VCA) can be modeled with a multiplier. This is shown
in Figure 5. The control in this figure is shown as a DC voltage. This may be set to any
value in the range ±Vmax. Beyond Vmax, the multiplier will overload.
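The step-size adaptation rule described above (a run of same-sign output bits signals slope overload and boosts the step) can be sketched as follows. This is my own simplified model (the three-bit run length follows the text; the step values and the ramp test signal are arbitrary):

```python
def adaptive_delta_modulate(samples, base_step=0.05, boost=2.0):
    # The step size is boosted while the last three output bits share the
    # same value (the slope-overload indicator), and returns to the base
    # step otherwise.
    approx, step = 0.0, base_step
    bits, track = [], []
    for x in samples:
        bit = 1 if x >= approx else 0
        bits.append(bit)
        if len(bits) >= 3 and bits[-1] == bits[-2] == bits[-3]:
            step = base_step * boost   # slope overload suspected
        else:
            step = base_step           # normal operation
        approx += step if bit else -step
        track.append(approx)
    return bits, track

# A steep ramp (0.08/sample) overloads the fixed base step of 0.05/sample
ramp = [0.08 * t for t in range(40)]
bits, track = adaptive_delta_modulate(ramp)

# For comparison: a fixed-step delta modulator on the same ramp
fixed = 0.0
for x in ramp:
    fixed += 0.05 if x >= fixed else -0.05

# The adaptive tracker 'catches up' and ends far closer to the ramp
print(abs(track[-1] - ramp[-1]), abs(fixed - ramp[-1]))
```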
Vocoders
Unit 3
Binary Phase Shift Keying (BPSK)
This yields two phases, 0 and π. In the specific form, binary data is often conveyed with
the following signals:

    s0(t) = √(2Eb/Tb) · cos(2πfc·t + π)    for binary "0"
    s1(t) = √(2Eb/Tb) · cos(2πfc·t)        for binary "1"

where fc is the carrier frequency, Eb the energy per bit, and Tb the bit duration. Using
the basis function φ(t) = √(2/Tb)·cos(2πfc·t), a 1 is represented by √Eb·φ(t) and a 0 is
represented by −√Eb·φ(t). This assignment is, of course, arbitrary.
This use of this basis function is shown at the end of the next section in a signal timing
diagram. The topmost signal is a BPSK-modulated cosine wave that the BPSK modulator
would produce. The bit-stream that causes this output is shown above the signal (the
other parts of this figure are relevant only to QPSK).
Bit error rate
The bit error rate (BER) of BPSK in AWGN can be calculated as:

    Pb = Q(√(2Eb/N0))

or, equivalently,

    Pb = (1/2)·erfc(√(Eb/N0)).
Since there is only one bit per symbol, this is also the symbol error rate.
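The BER expression Pb = (1/2)·erfc(√(Eb/N0)) can be checked against a Monte Carlo simulation of the baseband model (±√Eb symbols plus Gaussian noise). The function names, bit count, and seed below are my own choices:

```python
import math
import random

def ber_theory(ebn0_db):
    # Pb = (1/2) erfc( sqrt(Eb/N0) ) = Q( sqrt(2 Eb/N0) )
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebn0))

def ber_montecarlo(ebn0_db, n_bits=200_000, seed=1):
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))  # noise std for unit-energy bits
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        tx = 1.0 if bit else -1.0          # +sqrt(Eb)*phi(t) / -sqrt(Eb)*phi(t)
        rx = tx + rng.gauss(0.0, sigma)    # AWGN channel
        if (rx >= 0.0) != (bit == 1):      # threshold detector at zero
            errors += 1
    return errors / n_bits

print(ber_theory(6.0))       # about 2.4e-3 at Eb/N0 = 6 dB
print(ber_montecarlo(6.0))   # should agree closely with the theory value
```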
Unit 3
Differential Phase Shift Keying
DPSK / Differentially Encoded Phase Shift Keying
Differential phase shift keying (DPSK) is a common form of phase modulation that
conveys data by changing the phase of the carrier wave. As mentioned for BPSK and
QPSK there is an ambiguity of phase if the constellation is rotated by some effect in
the communications channel through which the signal passes. This problem can be
overcome by using the data to change rather than set the phase.
For example, in differentially encoded BPSK a binary '1' may be transmitted by adding
180° to the current phase and a binary '0' by adding 0° to the current phase. Another
variant of DPSK is Symmetric Differential Phase Shift keying, SDPSK, where encoding
would be +90° for a '1' and -90° for a '0'.
In differentially encoded QPSK (DQPSK), the phase-shifts are 0°, 90°, 180°, -90°
corresponding to data '00', '01', '11', '10'. This kind of encoding may be demodulated in
the same way as for non-differential PSK but the phase ambiguities can be ignored.
Thus, each received symbol is demodulated to one of the M points in the constellation
and a comparator then computes the difference in phase between this received signal
and the preceding one. The difference encodes the data as described above. Symmetric
Differential Quadrature Phase Shift Keying (SDQPSK) is like DQPSK, but encoding is
symmetric, using phase shift values of -135°, -45°, +45° and +135°.
The modulated signal is shown below for both DBPSK and DQPSK as described above. In
the figure, it is assumed that the signal starts with zero phase, and so there is a phase
shift in both signals at t = 0.
Timing diagram for DBPSK and DQPSK. The binary data stream is above the DBPSK signal.
The individual bits of the DBPSK signal are grouped into pairs for the DQPSK signal,
which only changes every Ts = 2Tb.
Analysis shows that differential encoding approximately doubles the error rate
compared to ordinary M-PSK, but this may be overcome by only a small increase
in Eb/N0. Furthermore, this analysis (and the graphical results below) is based on a
system in which the only corruption is additive white Gaussian noise (AWGN). However,
there will also be a physical channel between the transmitter and receiver in the
communication system. This channel will, in general, introduce an unknown phase-shift
to the PSK signal; in these cases the differential schemes can yield a better error-rate
than the ordinary schemes which rely on precise phase information.
Demodulation
BER comparison between DBPSK, DQPSK and their non-differential forms using Gray
coding and operating in white noise.
For a signal that has been differentially encoded, there is an obvious alternative method
of demodulation. Instead of demodulating as usual and ignoring carrier-phase
ambiguity, the phase between two successive received symbols is compared and used to
determine what the data must have been. When differential encoding is used in this
manner, the scheme is known as differential phase-shift keying (DPSK). Note that this is
subtly different from just differentially encoded PSK since, upon reception, the received
symbols are not decoded one-by-one to constellation points but are instead compared
directly to one another.
Call the received symbol in the k-th timeslot rk and let it have phase φk. Assume
without loss of generality that the phase of the carrier wave is zero. Denote
the AWGN term as nk. Then

rk = √Es e^(jφk) + nk

The decision variable for the (k−1)-th symbol and the k-th symbol is the phase
difference between rk and rk−1, i.e. the phase of

rk r*k−1 = Es e^(j(φk − φk−1)) + √Es e^(jφk) n*k−1 + √Es e^(−jφk−1) nk + nk n*k−1

where superscript * denotes complex conjugation. In the absence of noise, the phase of
this is φk − φk−1, the phase-shift between the two received signals, which can be used
to determine the data transmitted.
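The comparator operation can be sketched with complex numbers: the data are recovered from the phase of the product of the current symbol with the conjugate of the previous one. Symbol energy is normalized to 1 here for simplicity.

```python
import cmath, math

# DPSK demodulation sketch: the phase of r_k * conj(r_{k-1}) equals the
# transmitted phase shift in the absence of noise. Unit-energy symbols are an
# illustrative normalization.

def dpsk_phase_differences(received):
    """Phase of r_k * conj(r_{k-1}) in degrees, for k = 1..len-1."""
    diffs = []
    for prev, cur in zip(received, received[1:]):
        diffs.append(math.degrees(cmath.phase(cur * prev.conjugate())) % 360)
    return diffs

# Noise-free received symbols with absolute phases 0, 90, 270, 180 degrees:
rx = [cmath.exp(1j * math.radians(p)) for p in [0, 90, 270, 180]]
shifts = dpsk_phase_differences(rx)
```

The recovered shifts (90°, 180°, 270°) depend only on phase differences, so a constant channel phase rotation applied to every symbol would leave them unchanged.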
The probability of error for DPSK is difficult to calculate in general, but, in the case of
DBPSK it is:

Pb = (1/2) e^(−Eb/N0)

which, when numerically evaluated, is only slightly worse than ordinary BPSK,
particularly at higher Eb/N0 values.
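A quick numeric comparison of the two expressions, Pb = ½ e^(−Eb/N0) for DBPSK and Pb = Q(√(2Eb/N0)) for coherent BPSK, illustrates the point; the Eb/N0 operating points are illustrative choices.

```python
import math

# Numeric comparison of the DBPSK error rate 0.5*exp(-Eb/N0) with coherent
# BPSK Q(sqrt(2*Eb/N0)) at two illustrative operating points.

def q_func(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_pair(eb_n0_db):
    g = 10 ** (eb_n0_db / 10.0)
    return 0.5 * math.exp(-g), q_func(math.sqrt(2 * g))

low = ber_pair(4.0)    # (DBPSK, BPSK) error rates at 4 dB
high = ber_pair(10.0)  # (DBPSK, BPSK) error rates at 10 dB
```

DBPSK is worse at both points, but at 10 dB both error rates are already far below 10^-4, which is why the Eb/N0 penalty for avoiding carrier recovery is often considered acceptable.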
Using DPSK avoids the need for possibly complex carrier-recovery schemes to provide an
accurate phase estimate and can be an attractive alternative to ordinary PSK.
In optical communications, the data can be modulated onto the phase of a laser in a
differential way. The modulator consists of a laser which emits a continuous wave, and
a Mach-Zehnder modulator which receives electrical binary data. For the case of BPSK,
for example, the laser transmits the field unchanged for binary '1', and with reverse
polarity for '0'. The demodulator consists of a delay line interferometer which delays
one bit, so two bits can be compared at one time. In further processing, a photodiode is
used to transform the optical field into an electric current, so the information is changed
back into its original state.
The bit-error rates of DBPSK and DQPSK are compared to their non-differential
counterparts in the graph to the right. The loss for using DBPSK is small enough
compared to the complexity reduction that it is often used in communications systems
that would otherwise use BPSK. For DQPSK though, the loss in performance compared to
ordinary QPSK is larger and the system designer must balance this against the reduction
in complexity.
Example: Differentially encoded BPSK
So ek = ek−1 ⊕ bk only changes state (from binary '0' to binary '1' or from binary '1' to
binary '0') if bk is a binary '1'. Otherwise it remains in its previous state. This is the
description of differentially encoded BPSK given above.
The received signal is demodulated to yield ek = ±1 and then the differential decoder
reverses the encoding procedure and produces

bk = ek ⊕ ek−1

since binary subtraction is the same as binary addition.
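The encode/decode pair for differentially encoded BPSK can be written directly with XOR (binary addition); the initial reference bit e0 = 0 is an assumption.

```python
# Differential encoding/decoding sketch for BPSK: e_k = e_{k-1} XOR b_k, so
# the encoded stream changes state exactly when b_k = 1; the decoder recovers
# b_k = e_k XOR e_{k-1}. The reference bit e0 = 0 is an illustrative choice.

def diff_encode(bits, e0=0):
    encoded, e = [], e0
    for b in bits:
        e ^= b
        encoded.append(e)
    return encoded

def diff_decode(encoded, e0=0):
    prev, out = e0, []
    for e in encoded:
        out.append(e ^ prev)
        prev = e
    return out

data = [1, 0, 1, 1, 0]
enc = diff_encode(data)
dec = diff_decode(enc)
```

Because decoding only compares successive bits, inverting the entire encoded stream (a 180° carrier-phase ambiguity) still yields the original data apart from the first bit.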
Channel capacity
Given a fixed bandwidth, channel capacity vs. SNR for some common modulation
schemes
Like all M-ary modulation schemes with M = 2^b symbols, when given exclusive access to
a fixed bandwidth, the channel capacity of any phase shift keying modulation scheme
rises to a maximum of b bits per symbol as the signal-to-noise ratio increases.
Unit 3
Quadrature Phase Shift Keying
Quadrature PSK/ M-ary PSK
Sometimes this is known as quadriphase PSK, 4-PSK, or 4-QAM. (Although the root
concepts of QPSK and 4-QAM are different, the resulting modulated radio waves are
exactly the same.) QPSK uses four points on the constellation diagram, equispaced
around a circle. With four phases, QPSK can encode two bits per symbol, shown in the
diagram with Gray coding to minimize the bit error rate (BER) — sometimes
misperceived as twice the BER of BPSK.
The mathematical analysis shows that QPSK can be used either to double the data rate
compared with a BPSK system while maintaining the same bandwidth of the signal, or
to maintain the data rate of BPSK but halve the bandwidth needed. In this latter case,
the BER of QPSK is exactly the same as the BER of BPSK - and believing differently is a
common confusion when considering or describing QPSK. The transmitted carrier can
undergo a number of phase changes.
Given that radio communication channels are allocated by agencies such as the Federal
Communications Commission, which prescribe a (maximum) bandwidth, the advantage
of QPSK over BPSK becomes evident: QPSK transmits twice the data rate in a given
bandwidth compared to BPSK - at the same BER. The engineering penalty that is paid is
that QPSK transmitters and receivers are more complicated than the ones for BPSK.
However, with modern electronics technology, the penalty in cost is very moderate.
As with BPSK, there are phase ambiguity problems at the receiving end,
and differentially encoded QPSK is often used in practice.
Implementation
The implementation of QPSK is more general than that of BPSK and also indicates the
implementation of higher-order PSK. Writing the symbols in the constellation diagram in
terms of the sine and cosine waves used to transmit them:

sn(t) = √(2Es/Ts) cos(2πfc t + (2n − 1)π/4),    n = 1, 2, 3, 4

This yields the four phases π/4, 3π/4, 5π/4 and 7π/4 as needed.
This results in a two-dimensional signal space with unit basis functions

∅1(t) = √(2/Ts) cos(2πfc t)
∅2(t) = √(2/Ts) sin(2πfc t)

The first basis function is used as the in-phase component of the signal and the second
as the quadrature component of the signal.
Hence, the signal constellation consists of the 4 signal-space points

(±√(Es/2), ±√(Es/2))
The factors of 1/2 indicate that the total power is split equally between the two carriers.
Comparing these basis functions with that for BPSK shows clearly how QPSK can be
viewed as two independent BPSK signals. Note that the signal-space points for BPSK do
not need to split the symbol (bit) energy over the two carriers in the scheme shown in
the BPSK constellation diagram.
QPSK systems can be implemented in a number of ways. An illustration of the major
components of the transmitter and receiver structure is shown below.
The binary data stream is split into the in-phase and quadrature-phase components.
These are then separately modulated onto two orthogonal basis functions. In this
implementation, two sinusoids are used. Afterwards, the two signals are superimposed,
and the resulting signal is the QPSK signal. Note the use of polar non-return-to-zero
encoding. These encoders can be placed before the binary data source, but have been
placed after to illustrate the conceptual difference between digital and analog signals
involved with digital modulation.
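A minimal sketch of this transmitter, assuming an illustrative carrier frequency and sample rate: odd-numbered bits feed the in-phase arm and even-numbered bits the quadrature arm, each polar-NRZ encoded and modulated onto a cosine and a sine before summing.

```python
import math

# QPSK transmitter sketch following the structure described above: split the
# bit stream into I (odd-numbered bits) and Q (even-numbered bits), polar-NRZ
# encode to +/-1, modulate onto cos and sin, and sum. fc, Ts, and the sample
# rate are illustrative choices.

def qpsk_transmit(bits, fc=1.0, Ts=1.0, samples_per_symbol=8):
    assert len(bits) % 2 == 0
    i_bits = bits[0::2]                  # odd-numbered bits -> in-phase
    q_bits = bits[1::2]                  # even-numbered bits -> quadrature
    signal = []
    for n, (bi, bq) in enumerate(zip(i_bits, q_bits)):
        i_lvl = 1.0 if bi else -1.0      # polar NRZ encoding
        q_lvl = 1.0 if bq else -1.0
        for k in range(samples_per_symbol):
            t = (n + k / samples_per_symbol) * Ts
            signal.append(i_lvl * math.cos(2 * math.pi * fc * t)
                          - q_lvl * math.sin(2 * math.pi * fc * t))
    return signal

sig = qpsk_transmit([1, 1, 0, 0])
```

Each pair of bits produces one symbol interval Ts = 2Tb, consistent with the timing diagram discussed earlier.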
However, in order to achieve the same bit-error probability as BPSK, QPSK uses twice
the power (since two bits are transmitted simultaneously).
If the signal-to-noise ratio is high (as is necessary for practical QPSK systems) the
probability of symbol error may be approximated:

Ps ≈ 2Q(√(Es/N0)) = 2Q(√(2Eb/N0))
The modulated signal is shown below for a short segment of a random binary data-
stream. The two carrier waves are a cosine wave and a sine wave, as indicated by the
signal-space analysis above. Here, the odd-numbered bits have been assigned to the in-
phase component and the even-numbered bits to the quadrature component (taking
the first bit as number 1). The total signal — the sum of the two components — is shown
at the bottom. Jumps in phase can be seen as the PSK changes the phase on each
component at the start of each bit-period. The topmost waveform alone matches the
description given for BPSK above.
Frequency Shift Keying
In frequency-shift keying (FSK), the signals transmitted for marks (binary ones) and spaces
(binary zeros) are
s1(t) = A cos(ω1t + θc), 0<t≤T
s2(t) = A cos(ω2t + θc), 0<t≤T
respectively. This is called a discontinuous phase FSK system, because the phase of the
signal is discontinuous at the switching times.
If the bit intervals and the phases of the signals can be determined (usually by the use of
a phase-lock loop), then the signal can be decoded by two separate matched filters:
The first filter is matched to the signal s1(t), and the second to s2(t). Under the
assumption that the signals are mutually orthogonal, the output of one of the matched
filters will be E and the other zero (where E is the energy of the signal). Decoding of the
bandpass signal can therefore be achieved by subtracting the outputs of the two filters,
and comparing the result to a threshold.
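In sampled form the matched filters reduce to correlators. The sketch below, with illustrative tone frequencies chosen to be orthogonal over the bit interval, correlates the received signal with both tones, subtracts the two outputs, and compares the result to a zero threshold.

```python
import math

# Correlator form of the matched-filter FSK receiver described above.
# Orthogonality over (0, T] requires the tone spacing to be a multiple of
# 1/(2T); f1 = 3 Hz, f2 = 5 Hz over T = 1 s satisfy this. All values are
# illustrative.

def fsk_decide(received, f1=3.0, f2=5.0, T=1.0, n=1000):
    dt = T / n
    c1 = c2 = 0.0
    for k in range(n):                    # midpoint-rule correlation
        t = (k + 0.5) * dt
        c1 += received(t) * math.cos(2 * math.pi * f1 * t) * dt
        c2 += received(t) * math.cos(2 * math.pi * f2 * t) * dt
    return 1 if c1 - c2 > 0 else 0        # 1 -> mark (s1), 0 -> space (s2)

mark = lambda t: math.cos(2 * math.pi * 3.0 * t)    # noise-free s1(t)
space = lambda t: math.cos(2 * math.pi * 5.0 * t)   # noise-free s2(t)
```

With orthogonal tones, one correlator output is near E/2 (for unit-amplitude tones) and the other near zero, so the sign of the difference decides the bit.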
If the signal s1(t) is present then the resulting output will be +E, and if s2(t) is present it
will be −E. Since the noise variance at each filter output is Eη/2, the noise variance of
the difference signal will be doubled, namely σ2 = Eη. Since the separation between the
two outputs is 2E, the probability of error is

Pe = Q(√(E/η))
The overall performance of a matched filter receiver in this case is therefore the same as
for ASK.
This can be viewed as the linear superposition of two OOK signals, one delayed by T
seconds with respect to the other. Since the spectrum of an OOK signal is

S(ω) = (1/2)[M(ω − ωc) + M(ω + ωc)]

where M(ω) is the transform of the baseband signal m(t), the spectrum of the FSK signal
is the superposition of two of these spectra, one for ω1 = ωc − ∆ω and one for
ω2 = ωc + ∆ω. The bandwidth of the resulting FSK signal is then 2∆f + 2B, with B the
bandwidth of the baseband signal.
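A quick worked example of this bandwidth estimate, using illustrative numbers (frequency offset ∆f = 1 kHz and a 3 kHz baseband signal):

```python
# Worked example of the FSK bandwidth estimate B_FSK = 2*df + 2*B quoted
# above. The frequency offset (1 kHz) and baseband bandwidth (3 kHz) are
# illustrative numbers, not from the text.

def fsk_bandwidth(delta_f_hz, baseband_hz):
    return 2 * delta_f_hz + 2 * baseband_hz

bw = fsk_bandwidth(1_000, 3_000)   # 2*1000 + 2*3000 = 8000 Hz
```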
Nonsynchronous or envelope detection can be performed for FSK signals. In this case
the receiver takes the following form:
The bit error probability can be shown to be

Pe = (1/2) e^(−E/2η)

which under normal operating conditions corresponds to less than a 1 dB penalty over
coherent detection. In practice almost all FSK receivers are of this form.
In order for envelope detection to be successful, the peaks in the frequency domain at
ωc − ∆ω and ωc + ∆ω must be widely separated with respect to the bandwidth of the
baseband signal. This requires 2∆f T > 1.
A more practical alternative to discontinuous-phase FSK systems are continuous-phase
FSK systems, where a polar binary baseband signal m(t) is provided as the input to a
voltage-controlled oscillator (VCO):

s(t) = A cos(ωc t + ∆ω ∫ m(τ) dτ)
Overly sharp transitions in the phase of the output signal can be restricted by band-
limiting the input to the VCO.
Note that FSK is not true frequency modulation, and does not provide the wide-band
noise reduction properties associated with FM.