
Digital Communication - Analog to Digital

Most signals used in our day-to-day communication are analog (sound/voice). But we live in a digital
world: computers, the internet, cellular mobile systems, etc. are digital technologies. The use of
analog signals for long distance communication suffers from many losses such as distortion,
interference, and other losses including security breaches. Digitization helps to overcome or reduce
these problems, allowing the communication to be clearer and more accurate.
Digitization is thus a necessity in the telecommunication world.

The Necessity of Digitization


Digital signals propagate more efficiently than analog signals, largely because digital impulses are well
defined and orderly. They're also easier for electronic circuits to distinguish from noise, which is
chaotic. That is the chief advantage of digital communication modes.

Computers "talk" and "think" in terms of binary digital data. Before a microprocessor can analyze
analog data, the data must be converted into digital form for the computer to make sense of it.

The following figure indicates the difference between analog and digital signals. The digital signals
consist of 1s and 0s which indicate High and Low values respectively.

Advantages of Digital Communication


As the signals are digitized, digital communication has many advantages over analog
communication, such as:

 The effect of distortion, noise, and interference is much less on digital signals.
 Digital circuits are more reliable.
 Digital circuits are easier to design and cheaper than analog circuits.
 The hardware implementation of digital circuits is more flexible than that of analog circuits.
 The occurrence of cross-talk is very rare in digital communication.
 The signal is unaltered, as a pulse needs a large disturbance to alter its properties.
 Signal processing functions such as encryption and compression are employed in digital
circuits to maintain the secrecy of the information.
 The probability of error occurrence is reduced by employing error-detecting and error-
correcting codes.
 Spread spectrum techniques are used to avoid signal jamming.
 Combining digital signals using Time Division Multiplexing (TDM) is easier than combining
analog signals using Frequency Division Multiplexing (FDM).
 The configuring process of digital signals is easier than that of analog signals.
 Digital signals can be saved and retrieved more conveniently than analog signals.
 Many digital circuits use almost common encoding techniques, and hence similar devices
can be used for a number of purposes.
 The capacity of the channel is effectively utilized by digital signals.

Elements of Digital Communication


The elements which form a digital communication system are represented by the following block
diagram for ease of understanding.

Source: The source can be an analog signal. Example: A Sound signal

Input Transducer: This is a transducer which takes a physical input and converts it to an electrical
signal (Example: microphone). This block also consists of an analog to digital converter where a digital
signal is needed for further processes. A digital signal is generally represented by a binary sequence.

Source Encoder: The source encoder compresses the data into the minimum number of bits. This process
helps in effective utilization of the bandwidth. It removes the redundant bits (unnecessary excess
bits, i.e., zeroes).

Channel Encoder: The channel encoder, does the coding for error correction. During the transmission
of the signal, due to the noise in the channel, the signal may get altered and hence to avoid this, the
channel encoder adds some redundant bits to the transmitted data. These are the error correcting
bits.

Digital Modulator: The signal to be transmitted is modulated here by a carrier. The signal is also
converted to analog from the digital sequence, in order to make it travel through the channel or
medium.
Channel: The channel or a medium, allows the analog signal to transmit from the transmitter end to
the receiver end.

Digital Demodulator: This is the first step at the receiver end. The received signal is demodulated as
well as converted again from analog to digital. The signal gets reconstructed here.

Channel Decoder: The channel decoder, after detecting the sequence, does some error corrections.
The distortions which might occur during transmission are corrected using the redundant bits added
by the channel encoder. These bits help in the complete recovery of the original signal.

Source Decoder: The source decoder decompresses the received sequence so that the pure digital
output is obtained without loss of information. The source decoder recreates the
source output.

Output Transducer: This is the last block which converts the signal into the original physical form,
which was at the input of the transmitter. It converts the electrical signal into physical output
(Example: loud speaker).

Output Signal: This is the output which is produced after the whole process. Example − The sound
signal received.

Analog to Digital Converter (ADC)


Analog to digital conversion is the process of converting a continuous-time, continuous-value (analog)
signal to a discrete-time, discrete-value (digital) signal. Thus, an ADC or Analog to Digital Converter is an
electronic device or circuit that is used to convert a continuous analog electrical signal
into a discrete digital signal. We know that when physical quantities such as sound, flow, temperature,
etc. are converted into an electrical or electronic signal, the result is an analog signal.
But our processors work with digital signals only. Furthermore, since digital signals have many
advantages over analog signals, we need to convert the analog signal into a digital signal.

Analog to Digital Converter (ADC) Block Diagram

The main blocks or parts are,

Sampler: The sampler is a circuit that takes samples from the continuous analog signal at its
sampling frequency. The sampling frequency is set according to the requirement. Basically, the
sampler converts the continuous-time, continuous-amplitude signal into a discrete-time,
continuous-amplitude signal.

Holding Circuit: The holding circuit does not convert anything; it simply holds the samples generated by
the sampler circuit. It holds each sample until the next one arrives from the sampler, at which point
it releases the old sample to the next block.

Quantizer: The quantizer quantizes the signal, which means it converts the continuous-amplitude,
discrete-time signal into a discrete-time, discrete-amplitude signal. It maps each sample to the
nearest of a finite set of amplitude levels.

Encoder: The encoder is the circuit that actually generates the digital signal in binary form. The
output from the encoder is fed to the next circuitry. This is the final stage of the analog to digital conversion.
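The four blocks above can be strung together as a minimal numerical sketch. This is illustrative only: the input signal, sampling rate, bit depth, and voltage range are all assumed example values, not part of any particular ADC.

```python
import math

def adc(signal, fs, n_bits, v_min=-1.0, v_max=1.0, duration=1.0):
    """Toy ADC: sample -> quantize -> encode an analog signal (a function of time)."""
    step = (v_max - v_min) / (2 ** n_bits)           # quantizer step size
    samples = [signal(k / fs) for k in range(int(duration * fs))]  # sampler
    codes = []
    for x in samples:
        level = int((x - v_min) / step)              # quantizer: level index
        level = min(level, 2 ** n_bits - 1)          # clamp the full-scale value
        codes.append(format(level, f"0{n_bits}b"))   # encoder: n-bit binary word
    return codes

# Example: digitize a 5 Hz sine at fs = 40 Hz with a 3-bit quantizer
codes = adc(lambda t: math.sin(2 * math.pi * 5 * t), fs=40, n_bits=3)
print(codes[:4])
```

The hold step is implicit here: each list element stays fixed until the next one is produced, which is what the holding circuit does in hardware.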

In order to design an ADC the following factors must be considered:

(I) Effective number of bits (ENOB): is a measure of the dynamic range of an analog-to-digital
converter (ADC), digital-to-analog converter, or their associated circuitry. The resolution of an ADC is
specified by the number of bits used to represent the analog value. However, real signals have noise,
and real circuits are imperfect and introduce additional noise and distortion. Those imperfections
reduce the number of bits of accuracy in the ADC. The ENOB describes the effective resolution of the
system in bits. An ADC may have a 12-bit resolution, but the effective number of bits, when used in a
system, may be 9.5.

(II) Sampling Rate: This is a very crucial factor in the design of an ADC. This rate is one of the key
factors for preserving the fidelity of the signal being converted. In order for the analog signal to be
retrieved accurately at the receiving end, the sampling rate during conversion to digital form must be
correct.

It refers to the number of samples or data points taken per unit of time from an analog signal to
convert it into a digital format. It is also known as sampling frequency. It is measured in Hertz (Hz).
The formula for sampling rate or sampling frequency is given by:

sampling rate = 1/T_s = f_s , …………………………………………..(1)

where T_s is the sampling period and f_s is the sampling frequency.

The Nyquist theorem describes the correct sampling rate for analog to digital conversion. It
defines the minimum sampling rate required to accurately capture an analog signal in
digital form without information loss as twice the maximum frequency component present in the
analog signal. Mathematically it can be represented as:

f_s = 2 f_max , …………………………………………………………(2)

This minimum rate 2 f_max is referred to as the Nyquist rate (f_max itself is the Nyquist
frequency). If the sampling rate is less than the Nyquist rate, the sampled spectra overlap, a
situation called aliasing. When this occurs, the receiver is not able to reconstruct the original
signal accurately.
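Aliasing can be demonstrated numerically: sampled at only 10 Hz, a 7 Hz cosine produces exactly the same sample values as its 3 Hz alias (|7 − 10| = 3 Hz), so a receiver cannot distinguish the two. The frequencies are assumed example values.

```python
import math

fs = 10                   # sampling rate (Hz) -- below the Nyquist rate for 7 Hz
f_high, f_alias = 7, 3    # a 7 Hz tone aliases to |7 - 10| = 3 Hz

for k in range(10):
    t = k / fs
    s_high = math.cos(2 * math.pi * f_high * t)
    s_alias = math.cos(2 * math.pi * f_alias * t)
    assert math.isclose(s_high, s_alias, abs_tol=1e-9)  # identical samples

print("7 Hz and 3 Hz are indistinguishable when sampled at 10 Hz")
```

Raising fs above 14 Hz (the Nyquist rate for a 7 Hz tone) makes the two sample sequences differ again, which is exactly what the corrective measures below are designed to guarantee.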
The Sampling Process
Sampling: It is the process by which, CTS (continuous time signal) is converted into DTS (discrete
time signal) by taking the signal values at some distinct points in time, meaning that this is used
to take samples of analog signals at some points in time (regular or irregular). Therefore:
Sample: a sample is the numeric value of an analog signal at a specific time. It is just the signal's
measured amplitude at a particular instant, converted to a digital representation. A sample can also
be said to be a piece of data taken from the whole data, which is continuous in the time domain.
When a source generates an analog signal and if that has to be digitized, having 1s and 0s i.e.,
High or Low, the signal has to be discretized in time. This discretization of analog signal is called
sampling. The following figure indicates a continuous-time signal and a sampled signal:

Aliasing
Aliasing can be referred to as “the phenomenon of a high-frequency component in the spectrum
of a signal, taking on the identity of a low-frequency component in the spectrum of its sampled
version.” The corrective measures taken to reduce the effect of Aliasing are
 In the transmitter section, a low pass anti-aliasing filter is employed before the sampler
to eliminate the unwanted high frequency components.
 After filtering, the signal is sampled at a rate slightly higher than the
Nyquist rate.
This choice of having the sampling rate higher than Nyquist rate, also helps in the easier design
of the reconstruction filter at the receiver.
(III) Quantization Error: The difference between an input value and its quantized value is called
a quantization error. A quantizer is the function that performs quantization by rounding off the
value to the nearest quantization level.
Quantization Noise: This is a type of quantization error which usually occurs when an analog audio
signal is quantized to digital. For example, in music the signal keeps changing continuously, so
no regularity is found in the errors. Such errors create a wideband noise called
quantization noise.

The Quantization Process


The digitization of analog signals involves rounding off the values to levels approximately equal
to the analog values. Sampling chooses a few points on the analog signal, and then each of
these points is rounded off to a nearby stabilized value. Such a process is called
Quantization.

The quantizing of an analog signal is done by discretizing the sampled signal into a number of
quantization levels. Quantization is representing the sampled values of the amplitude by a finite set
of levels, which means converting a continuous-amplitude sample into a discrete-amplitude one.

Simply put, Quantization is the process to represent a continuous-valued signal with a limited set of
discrete values. In other words, it involves mapping a continuous signal’s infinite range of potential
values to a finite collection of discrete values.
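A minimal sketch of such a mapping, using a uniform (mid-tread) quantizer with an assumed step size of 0.25:

```python
def quantize(x, step=0.25):
    """Map a continuous amplitude to the nearest of a finite set of levels."""
    return round(x / step) * step

# Arbitrary example samples: each value snaps to a multiple of the step size
samples = [0.11, 0.49, -0.37, 0.74]
print([quantize(s) for s in samples])
```

Every possible input in a given interval collapses onto the same output level; the difference between input and output is the quantization error discussed above, and it can never exceed half the step size for this quantizer.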

The following figure shows how an analog signal gets quantized. The blue line represents analog signal
while the brown one represents the quantized signal.
(IV) Resolution/Step size/Width step: This is defined as the minimum possible change at the
output for any change in the input analog signal, or as the smallest incremental voltage that
can be recognized and thus causes a change in the digital output. It is expressed in terms of the
number of bits output by the ADC; an ADC which converts the analog signal to a 12-bit digital value
has a resolution of 12 bits. Mathematically it is defined as

RESOLUTION = (V_max − V_min) / L = (V_max − V_min) / 2^n

where L = 2^n is the number of quantization levels and V_max, V_min are the maximum and minimum
voltage levels of the quantizer.
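A quick numerical check of this definition; the voltage range and bit depths are assumed example values:

```python
def resolution(v_max, v_min, n_bits):
    """Smallest voltage change an n-bit ADC can distinguish."""
    levels = 2 ** n_bits                 # L = 2^n quantization levels
    return (v_max - v_min) / levels

# A 3-bit ADC spanning 0-8 V resolves steps of 1 V
print(resolution(8.0, 0.0, 3))   # -> 1.0
# A 10-bit ADC over the same range resolves much finer steps
print(resolution(8.0, 0.0, 10))
```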

(V) Dither: Dithering is the process of adding a small amount of random noise to a digital audio signal
in order to reduce the distortion caused by quantization error. Historically, this technique has been
used mainly in the audio field to improve the sound of digital audio.

(VI) Accuracy: The accuracy of an ADC (Analog to Digital Converter) is how closely the digital
output represents the true analog input signal.

Other important considerations are non-linearity, jitter, aliasing, oversampling, and the speed and
precision of the ADC.

Types of Analog to Digital Converters


ADC is available in different types and some of the types of analog to digital converters include:

 Dual Slope A/D Converter


 Flash A/D Converter
 Successive Approximation A/D Converter
 Semi-flash ADC
 Sigma-Delta ADC
 Pipelined ADC

Assignment: Describe the highlighted methods of implementing ADC. Submit before Monday's class (26-02-2024).
Encoding
The final block in an ADC is the encoder, which converts the quantized samples into binary form. We know
that a digital device works using binary signals, so the quantized signal must be changed to
binary with the help of an encoder. After each sample is quantized and the number of bits per sample
is decided, each sample can be changed to an n-bit code word. Encoding also minimizes the bandwidth used.

Encoding is the process of converting the data or a given sequence of characters, symbols, alphabets
etc., into a specified format, for the secured transmission of data. Decoding is the reverse process of
encoding which is to extract the information from the converted format.

Data Encoding: Encoding is the process of using various patterns of voltage or current levels to
represent 1s and 0s of the digital signals on the transmission link.

Encoding Techniques
The data encoding technique is divided into the following types, depending upon the type of data
conversion.

Analog data to Analog signals − The modulation techniques such as Amplitude Modulation, Frequency
Modulation and Phase Modulation of analog signals fall under this category.

Analog data to Digital signals − This process can be termed digitization, which is done by Pulse
Code Modulation (PCM). Hence, it is nothing but digital modulation. As we have already discussed,
sampling and quantization are the important factors in this. Delta Modulation can give a better output
than PCM.

Digital data to Analog signals − The modulation techniques such as Amplitude Shift Keying (ASK),
Frequency Shift Keying (FSK), Phase Shift Keying (PSK), etc., fall under this category.

Digital data to Digital signals − There are several ways to map digital data to digital signals. Some
of them are −

Non-Return to Zero (NRZ)

NRZ codes have 1 for the High voltage level and 0 for the Low voltage level. The main behavior of NRZ
codes is that the voltage level remains constant during the bit interval. The end or start of a bit is
not indicated, and the same voltage state is maintained if the value of the previous bit and the value
of the present bit are the same. The following figure explains the concept of NRZ coding.

If a long sequence at a constant voltage level occurs, clock synchronization may be lost due to the
absence of bit boundaries, and it becomes difficult for the receiver to
differentiate between 0 and 1.

There are two variations in NRZ namely −

NRZ-L (NRZ-Level)

There is a change in the polarity of the signal only when the incoming signal changes from 1 to 0 or
from 0 to 1. It is otherwise the same as NRZ; however, the first bit of the input signal should have a
change of polarity.

NRZ-I (NRZ-Inverted)

If a 1 occurs at the incoming signal, then there occurs a transition at the beginning of the bit interval.
For a 0 at the incoming signal, there is no transition at the beginning of the bit interval.

NRZ codes have the disadvantage that the synchronization of the transmitter clock with the receiver
clock gets completely disturbed when there is a long run of bits without transitions. Hence, a
separate clock line needs to be provided.
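The two variants can be sketched as bit-to-level mappings. NRZ-L is shown in its common textbook form (the level directly encodes the bit), while NRZ-I follows the transition rule described above; levels are +1 for High and −1 for Low, and the NRZ-I starting level is assumed High.

```python
def nrz_l(bits):
    """NRZ-L (textbook form): the level itself encodes the bit (+1 for 1, -1 for 0)."""
    return [1 if b == 1 else -1 for b in bits]

def nrz_i(bits, start_level=1):
    """NRZ-I: invert the level at the start of each bit interval for a 1;
    hold the level for a 0."""
    level, out = start_level, []
    for b in bits:
        if b == 1:
            level = -level       # a transition marks a 1
        out.append(level)
    return out

bits = [1, 0, 1, 1, 0, 0, 1]
print(nrz_l(bits))
print(nrz_i(bits))
```

Feeding a long run of 0s into either encoder shows the synchronization problem directly: the output level never changes, so the receiver sees no bit boundaries.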

Bi-phase Encoding

The signal level is checked twice for every bit time, both initially and in the middle. Hence, the clock
rate is double the data transfer rate and thus the modulation rate is also doubled. The clock is taken
from the signal itself. The bandwidth required for this coding is greater. There are two types of Bi-
phase Encoding.

 Bi-phase Manchester
 Differential Manchester

Bi-phase Manchester

In this type of coding, the transition is done at the middle of the bit-interval. The transition for the
resultant pulse is from High to Low in the middle of the interval, for the input bit 1. While the transition
is from Low to High for the input bit 0.
Differential Manchester

In this type of coding, there always occurs a transition in the middle of the bit interval. If there occurs
a transition at the beginning of the bit interval, then the input bit is 0. If no transition occurs at the
beginning of the bit interval, then the input bit is 1.

The following figure illustrates the waveforms of NRZ-L, NRZ-I, Bi-phase Manchester and Differential
Manchester coding for different digital inputs.
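Both bi-phase schemes can be sketched by emitting two half-bit levels per input bit (+1 for High, −1 for Low), following the conventions described above; the previous level for differential Manchester is assumed High at the start.

```python
def manchester(bits):
    """Bi-phase Manchester: 1 -> High-to-Low mid-bit, 0 -> Low-to-High."""
    out = []
    for b in bits:
        out += [1, -1] if b == 1 else [-1, 1]
    return out

def diff_manchester(bits, start_level=1):
    """Differential Manchester: always a mid-bit transition; a transition
    at the start of the interval encodes 0, no transition encodes 1."""
    level, out = start_level, []
    for b in bits:
        if b == 0:
            level = -level           # start-of-interval transition for a 0
        out += [level, -level]       # guaranteed mid-bit transition
        level = -level               # second half becomes the new level
    return out

bits = [1, 0, 0, 1]
print(manchester(bits))
print(diff_manchester(bits))
```

Because every bit interval contains a transition, the receiver can recover the clock from the waveform itself, at the cost of doubling the signaling rate, exactly as described above.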

We also have block coding and line coding techniques.

Digital Communication - Information Theory


Information is the source of a communication system, whether it is analog or digital. Information
theory is a mathematical approach to the study of coding of information along with the quantification,
storage, and communication of information.

Conditions of Occurrence of Events


If we consider an event, there are three conditions of occurrence.

 If the event has not occurred, there is a condition of uncertainty.

 If the event has just occurred, there is a condition of surprise.

 If the event has occurred, a time back, there is a condition of having some information.

These three events occur at different times. The differences between these conditions help us gain
knowledge of the probabilities of the occurrence of events.

Entropy
When we observe the possibilities of the occurrence of an event, and how surprising or uncertain it
would be, we are trying to get an idea of the average information content from the
source of the event.
Entropy can be defined as a measure of the average information content per source symbol. Claude
Shannon, the "father of Information Theory", provided a formula for it as −

H = − Σ_i p_i log_b(p_i)

where p_i is the probability of the occurrence of character number i from a given stream of characters
and b is the base of the logarithm used. Hence, this is also called Shannon's Entropy.
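Shannon's formula can be checked directly; the probability distributions below are assumed examples:

```python
import math

def entropy(probs, b=2):
    """H = -sum(p_i * log_b(p_i)), skipping zero-probability symbols."""
    return -sum(p * math.log(p, b) for p in probs if p > 0)

print(entropy([0.5, 0.5]))            # fair coin: 1 bit per symbol
print(entropy([0.25] * 4))            # four equiprobable symbols: 2 bits
print(round(entropy([0.9, 0.1]), 3))  # a biased source carries less information
```

The biased source has lower entropy than the fair coin: the more predictable a source is, the less information each symbol carries on average.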

The amount of uncertainty remaining about the channel input after observing the channel output is
called the Conditional Entropy. It is denoted by H(x ∣ y).

Mutual Information
Let us consider a channel whose output is Y and input is X.
Let the entropy for the prior uncertainty about the input be H(x) (this is assumed before the input
is applied).
To know the uncertainty about the input after the output is observed, let us consider the conditional
entropy H(x ∣ y), given that Y = y_k.

Now, considering both the uncertainty conditions before and after applying the inputs, we come to
know that the difference,
i.e. H(x) − H(x ∣ y)
must represent the uncertainty about the channel input that is resolved by observing the channel
output.
This is called the Mutual Information of the channel. Denoting the mutual information as I(x; y),
we can write the whole thing in an equation, as follows
I(x; y) = H(x) − H(x ∣ y)
Hence, this is the equational representation of Mutual Information.

Properties of Mutual information


These are the properties of Mutual information.

Mutual information of a channel is symmetric.


𝐼(𝑥; 𝑦) = 𝐼(𝑦; 𝑥)

Mutual information is non-negative.


𝐼(𝑥; 𝑦) ≥ 0

Mutual information can be expressed in terms of entropy of the channel output.


𝐼(𝑥; 𝑦) = 𝐻(𝑦) − 𝐻(𝑦 ∣ 𝑥)
Where 𝐻(𝑦 ∣ 𝑥) is a conditional entropy

Mutual information of a channel is related to the joint entropy of the channel input and the channel
output.
𝐼(𝑥; 𝑦) = 𝐻(𝑥) + 𝐻(𝑦) − 𝐻(𝑥, 𝑦)
Where the joint entropy H(x, y) is defined by

H(x, y) = − Σ_j Σ_k p(x_j , y_k) log_2 p(x_j , y_k)
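These identities can be verified numerically from a joint distribution. The 2×2 joint distribution p(x, y) below is an assumed example, not taken from any particular channel:

```python
import math

def H(probs):
    """Entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Assumed joint distribution p(x, y) for a binary-input, binary-output channel
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distributions of the input x and the output y
p_x = [sum(v for (x, _), v in p_xy.items() if x == i) for i in (0, 1)]
p_y = [sum(v for (_, y), v in p_xy.items() if y == j) for j in (0, 1)]

H_x, H_y, H_xy = H(p_x), H(p_y), H(p_xy.values())
I = H_x + H_y - H_xy          # I(x; y) = H(x) + H(y) - H(x, y)
print(round(I, 4))
```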

Channel Capacity
We have so far discussed mutual information. The channel capacity of a discrete memoryless channel
is the maximum of the average mutual information I(x; y), taken over all possible input probability
distributions, in any one signaling interval. It gives the maximum rate at which data can be
transmitted reliably over the channel.

It is denoted by C and is measured in bits per channel use.
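As an illustration, the binary symmetric channel (used here as an assumed example, not discussed above) has the well-known closed-form capacity C = 1 − H_b(p), where H_b is the binary entropy of the crossover probability p:

```python
import math

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p:
    C = 1 - H_b(p), in bits per channel use."""
    if p in (0.0, 1.0):
        return 1.0               # noiseless (or deterministically inverting) channel
    h_b = -p * math.log2(p) - (1 - p) * math.log2(1 - p)   # binary entropy
    return 1.0 - h_b

print(bsc_capacity(0.0))    # -> 1.0 (noiseless: one full bit per use)
print(bsc_capacity(0.5))    # -> 0.0 (pure noise: no information gets through)
```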

Discrete Memoryless Source


A source from which the data is being emitted at successive intervals, which is independent of previous
values, can be termed as discrete memoryless source. This source is discrete as it is not considered for
a continuous time interval, but at discrete time intervals. This source is memoryless as it is fresh at
each instant of time, without considering the previous values.
The code produced by a discrete memoryless source has to be efficiently represented, which is an
important problem in communications. For this to happen, code words are used to represent the
source symbols.
For example, in telegraphy we use Morse code, in which the letters are denoted by marks and
spaces. The letter E, which is used most often, is denoted by ".", whereas the letter Q,
which is rarely used, is denoted by "--.-".

Let us take a look at the block diagram


Where 𝑆𝑘 is the output of the discrete memoryless source and 𝑏𝑘 is the output of the source encoder
which is represented by 0s and 1s.
The encoded sequence is such that it is conveniently decoded at the receiver.

Let us assume that the source has an alphabet with K different symbols and that the k-th symbol s_k
occurs with probability p_k, where k = 0, 1, …, K − 1.

Let the binary code word assigned to symbol s_k by the encoder have length l_k, measured in bits.

Hence, we define the average code word length L̄ of the source encoder as:

L̄ = Σ_{k=0}^{K−1} p_k l_k

L̄ represents the average number of bits per source symbol.

If L̄_min is the minimum possible value of L̄, then the coding efficiency can be defined as

η = L̄_min / L̄

The source encoder is said to be efficient when η approaches 1.
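By the source coding theorem, the minimum possible average length L̄_min equals the source entropy, so the efficiency can be computed as η = H/L̄. A quick check with an assumed four-symbol source and the code words 0, 10, 110, 111:

```python
import math

# Assumed source: symbol probabilities and the assigned code-word lengths (bits)
p = [0.5, 0.25, 0.125, 0.125]
l = [1, 2, 3, 3]                  # lengths of code words 0, 10, 110, 111

L_bar = sum(pk * lk for pk, lk in zip(p, l))      # average code word length
H = -sum(pk * math.log2(pk) for pk in p)          # source entropy (= L_min here)
eta = H / L_bar                                   # coding efficiency

print(L_bar, H, round(eta, 3))  # this code is optimal for this source: eta = 1.0
```

For this particular source the code-word lengths exactly match −log2(p_k), which is why the efficiency comes out to 1; for arbitrary probabilities, η is strictly less than 1.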

Given a discrete memoryless source of entropy H(δ), the average code word length L̄ for any
distortionless source encoding is bounded as

L̄ ≥ H(δ)
This source coding theorem is called the noiseless coding theorem, as it establishes error-free
encoding. It is also called Shannon's first theorem.
The noise present in a channel creates unwanted errors between the input and the output sequences
of a digital communication system. The error probability should be very low, nearly ≤ 10⁻⁶, for
reliable communication.

The channel coding in a communication system, introduces redundancy with a control, so as to


improve the reliability of the system. The source coding reduces redundancy to improve the efficiency
of the system.

Channel coding consists of two parts:


 Mapping incoming data sequence into a channel input sequence.
 Inverse Mapping the channel output sequence into an output data sequence.

The final target is that the overall effect of the channel noise should be minimized. The mapping is
done by the transmitter, with the help of an encoder, whereas the inverse mapping is done by the
decoder in the receiver.

Channel Coding
Let a discrete memoryless source with alphabet δ have entropy H(δ) and produce one symbol every
T_s seconds. Let a discrete memoryless channel have capacity C and be used once every T_c seconds.
Then, if

H(δ) / T_s ≤ C / T_c

there exists a coding scheme for which the source output can be transmitted over the channel and
reconstructed with an arbitrarily small probability of error. The quantity C / T_c is the critical
rate: the maximum rate of reliable, error-free transmission over the discrete memoryless channel.
This is called the channel coding theorem.
