
UNIT V

Spread Spectrum Techniques and Information Theory Concepts

Part-II
Measures of Information
• Information is a measure of uncertainty: the lower the probability of
occurrence of a message, the higher the information it carries.

• Since information is closely associated with the uncertainty of the
occurrence of a particular symbol, the information conveyed when the
symbol occurs is defined as:
Ik = log2(1/Pk) = -log2(Pk)

where Pk is the probability of occurrence of symbol 'k'
and Ik is the information carried by symbol 'k'.
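For illustration, a minimal Python sketch of this definition (the probabilities used are arbitrary):

```python
import math

def self_information(p: float) -> float:
    """Ik = -log2(Pk), in bits, for a symbol occurring with probability p."""
    return -math.log2(p)

# A symbol that occurs half the time carries 1 bit of information;
# a rarer symbol with p = 0.125 carries 3 bits.
print(self_information(0.5))    # 1.0
print(self_information(0.125))  # 3.0
```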
Entropy
• Entropy is defined in terms of the probabilistic behaviour of a
source of information.
• In information theory the source outputs are discrete random
variables drawn from a fixed finite alphabet with certain
probabilities.
– Entropy is the average information content per source symbol
(bits/message).
H = Σ pk log2(1/pk),  summed over k = 0, 1, …, K-1   (bits/symbol)
Example

Find the entropy of the source alphabet {S0, S1, S2} with the respective
probabilities given for each symbol.

Solution:
We know that

H = Σ pk log2(1/pk),  k = 0, 1, …, K-1
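Since the slide does not reproduce the probability values, the short Python sketch below uses assumed probabilities {0.5, 0.25, 0.25} for {S0, S1, S2}, purely for illustration:

```python
import math

def entropy(probs):
    """H = sum of pk * log2(1/pk) over all symbols, in bits/symbol."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Assumed probabilities, not taken from the notes.
print(entropy([0.5, 0.25, 0.25]))  # 1.5 bits/symbol
```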
Rate of Information

• If a source generates 'r' messages per second, the rate of information
'R' is defined as the average number of bits of information per second.

• 'H' is the average number of bits of information per message.

Hence, R = rH bits/sec
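For instance, with figures assumed only for illustration, a source emitting r = 1000 messages per second with entropy H = 2 bits/message has an information rate R = rH = 2000 bits/sec.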
Source Coding
• Source coding (lossless data compression) removes redundant
information from the signal prior to transmission.
• Basically this is achieved by assigning short descriptions to the
most frequent outcomes of the source output and longer descriptions
to the less frequent ones.
• The common source-coding schemes are
Huffman coding
Shannon-Fano coding
Lempel-Ziv coding
Prefix coding
What is the need for source coding?

• For analog sources, source coding is related to the amplitude
distribution and the function of the source waveform.

• For discrete sources, source coding is related to the information
content and the correlation among the symbols of the source.

Source encoding must satisfy two functional requirements:

1. Efficient representation of data in binary form.
2. It must be uniquely decodable.
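As a small illustration of the second requirement, the sketch below (with made-up code words) checks the prefix condition, a sufficient condition for a code to be uniquely decodable:

```python
def is_prefix_free(codewords):
    """True if no code word is a prefix of another (the prefix condition)."""
    for i, a in enumerate(codewords):
        for j, b in enumerate(codewords):
            if i != j and b.startswith(a):
                return False
    return True

# Hypothetical codes, chosen only for illustration.
print(is_prefix_free(["0", "10", "110", "111"]))  # True  -> uniquely decodable
print(is_prefix_free(["0", "01", "11"]))          # False -> "0" is a prefix of "01"
```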
Source Coding Theorem
• The source coding theorem states that the output of any information
source having entropy H units per symbol can be encoded into an
alphabet having N symbols in such a way that the source symbols are
represented by code words having a weighted average length not less
than H/log N.

• Hence the source coding theorem says that encoding of messages from a
source with entropy H can be done, bounded by the fundamental
information-theoretic limit that the minimum average number of
symbols per message is H/log N.
The average code word length L̄ of the source encoder is given by

    L̄ = Σ pk lk,  k = 0, 1, …, K-1

where lk is the length of the code word assigned to symbol sk; L̄
represents the average number of bits per source symbol.

The code efficiency is given by

    η = Lmin / L̄

where Lmin represents the minimum possible value of L̄.

The average code word length for any distortionless source encoding
scheme is bounded as

    L̄ ≥ H

According to the source coding theorem, the entropy H represents a
fundamental limit on the average number of bits per source symbol: L̄ can
approach, but never be smaller than, the entropy. Hence we can rewrite the
efficiency of a source coder in terms of the entropy as

    η = H / L̄
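A minimal Python sketch of these relations; the symbol probabilities and code word lengths below are assumed values chosen only for illustration:

```python
import math

def avg_code_length(probs, lengths):
    """L_bar = sum of pk * lk: average number of bits per source symbol."""
    return sum(p * l for p, l in zip(probs, lengths))

def entropy(probs):
    """H = sum of pk * log2(1/pk), in bits per symbol."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Assumed source: four symbols encoded as 0, 10, 110, 111.
probs   = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]

L_bar = avg_code_length(probs, lengths)   # 1.75 bits/symbol
H     = entropy(probs)                    # 1.75 bits/symbol
print(L_bar, H, H / L_bar)                # efficiency = 1.0 for this source
```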
Huffman Coding
With the Huffman code, in the binary case, the two least probable source
symbols are joined together, resulting in a new message alphabet with one
less symbol.

Applications: JPEG, MPEG, MP3

ADVANTAGES:
• Uniquely decodable code
• Smallest average codeword length
DISADVANTAGES:
• Large code tables add complexity
• Sensitive to channel errors
Procedure
• Create a list of the symbols in decreasing order of probability. The
two symbols with the lowest probabilities are assigned a '0' and a '1'.

• These two symbols are combined into a new symbol with probability
equal to the sum of their individual probabilities. The new symbol is
placed in the list according to its probability value.

• The procedure is repeated until only two symbols remain, to which 0
and 1 are assigned.

• The Huffman code for a symbol is the bit sequence obtained by working
backwards and tracking the sequence of 0's and 1's assigned to that
symbol and its successors.
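A compact Python sketch of this procedure using a heap (illustrative, not part of the original notes); with this tie-breaking the code word lengths match the worked example that follows, although the exact bit patterns may differ:

```python
import heapq
from itertools import count

def huffman_code(probabilities):
    """Build a binary Huffman code for a {symbol: probability} mapping.

    Repeatedly merges the two least probable entries, prepending '0' to the
    code words of one branch and '1' to the other, as in the procedure above.
    """
    tiebreak = count()  # keeps equal probabilities from comparing the dicts
    heap = [(p, next(tiebreak), {sym: ""}) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, codes0 = heapq.heappop(heap)   # least probable group
        p1, _, codes1 = heapq.heappop(heap)   # next least probable group
        merged = {s: "0" + c for s, c in codes0.items()}
        merged.update({s: "1" + c for s, c in codes1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

codes = huffman_code({"s0": 0.1, "s1": 0.2, "s2": 0.4, "s3": 0.2, "s4": 0.1})
print(codes)  # lengths: s2, s1, s3 -> 2 bits; s0, s4 -> 3 bits
```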
Huffman Coding: Example

Source Symbol   Symbol Probability
     sk                pk
     s0               0.1
     s1               0.2
     s2               0.4
     s3               0.2
     s4               0.1
Solution
At each stage the two symbols with the lowest probabilities are assigned a
0 and a 1 and combined; reading the assigned bits backwards gives the code
word for each symbol.

Source      Stage I   Stage II   Stage III   Stage IV   Code
Symbol sk
   s2         0.4       0.4        0.4         0.6       00
   s1         0.2       0.2        0.4         0.4       10
   s3         0.2       0.2        0.2                   11
   s0         0.1       0.2                              010
   s4         0.1                                        011
The average code word length is

    L̄ = 0.4(2) + 0.2(2) + 0.2(2) + 0.1(3) + 0.1(3) = 2.2 bits/symbol

The entropy is

    H = 0.4 log2(1/0.4) + 2 × 0.2 log2(1/0.2) + 2 × 0.1 log2(1/0.1)
      ≈ 2.12 bits/symbol
Exercise
Find the Huffman code for the following source:

Symbol Probability
h 0.1
e 0.1
l 0.4
o 0.25
w 0.05
r 0.05
d 0.05
Shannon-Fano Coding

• In Shannon-Fano coding, the symbols are arranged in order from most
probable to least probable, and then divided into two sets whose total
probabilities are as close as possible to being equal. All symbols then
have the first digits of their codes assigned; symbols in the first set
receive "0" and symbols in the second set receive "1".

• As long as any set with more than one member remains, the same process
is repeated on those sets to determine the successive digits of their
codes. When a set has been reduced to one symbol, the symbol's code is
complete and will not form the prefix of any other symbol's code.
Shannon's Channel Coding Theorem

• Shannon's theorem states that, given a noisy channel with channel
capacity C and information transmitted at a rate R, if R < C there exist
codes that allow the probability of error at the receiver to be made
arbitrarily small. This means that, theoretically, it is possible to
transmit information nearly without error at any rate below the limiting
rate C.

• The converse is also important. If R > C, an arbitrarily small
probability of error is not achievable: all codes will have a probability
of error greater than a certain positive minimal level, and this level
increases as the rate increases. So information cannot be guaranteed to
be transmitted reliably across a channel at rates beyond the channel
capacity.
Shannon-Fano Coding: Example

A discrete memoryless source has six source symbols S0, S1, S2, S3, S4
and S5 with their probability assignments P(S0) = 0.30, P(S1) = 0.25,
P(S2) = 0.20, P(S3) = 0.12, P(S4) = 0.08 and P(S5) = 0.05. Encode the
source with Shannon-Fano codes. Calculate its efficiency.
Solution
Splitting the ordered list into nearly equal halves at each step gives code
word lengths of 2, 2, 2, 3, 4 and 4 bits for S0 to S5.

The average code word length is

    L̄ = 0.30(2) + 0.25(2) + 0.20(2) + 0.12(3) + 0.08(4) + 0.05(4)
       = 2.38 bits/symbol

The entropy is

    H = Σ pk log2(1/pk) ≈ 2.36 bits/symbol

The code efficiency is therefore

    η = H / L̄ ≈ 2.36 / 2.38 ≈ 0.992, i.e. about 99.2%.
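A recursive Python sketch of the splitting procedure, applied to the probabilities of this example (an illustrative implementation, not taken from the notes):

```python
def shannon_fano(symbols):
    """Assign Shannon-Fano codes to a list of (symbol, probability) pairs
    that is already sorted in decreasing order of probability."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(p for _, p in symbols)
    # Find the split whose two halves have totals as close as possible.
    running, best_split, best_diff = 0.0, 1, float("inf")
    for i in range(1, len(symbols)):
        running += symbols[i - 1][1]
        diff = abs(running - (total - running))
        if diff < best_diff:
            best_diff, best_split = diff, i
    upper = shannon_fano(symbols[:best_split])   # first set -> prefix "0"
    lower = shannon_fano(symbols[best_split:])   # second set -> prefix "1"
    codes = {s: "0" + c for s, c in upper.items()}
    codes.update({s: "1" + c for s, c in lower.items()})
    return codes

source = [("S0", 0.30), ("S1", 0.25), ("S2", 0.20),
          ("S3", 0.12), ("S4", 0.08), ("S5", 0.05)]
codes = shannon_fano(source)
L_bar = sum(p * len(codes[s]) for s, p in source)
print(codes)   # code word lengths 2, 2, 2, 3, 4, 4
print(L_bar)   # 2.38 bits/symbol, as above
```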
Linear Block Codes

Channel Coding

Why?
To increase the resistance of digital communication systems to channel
noise via error control coding.

How?
By mapping the incoming data sequence into a channel input sequence, and
inverse mapping the channel output sequence into an output data sequence,
in such a way that the overall effect of channel noise on the system is
minimized.

Redundancy is introduced in the channel encoder so that the original
source sequence can be reconstructed as accurately as possible.
The Implication of Error Control Coding

• Addition of redundancy implies the need for increased transmission
bandwidth.

• It also adds complexity to the decoding operation.

• Therefore, there is a design trade-off in the use of error-control
coding to achieve acceptable error performance, considering bandwidth
and system complexity.

Types of Error Control Coding

• Linear block codes
• Convolutional codes
Linear Block Codes

An (n,k) block code indicates that each code word has n bits, of which k
bits carry the original binary message.

A code is said to be linear if any two code words in the code can be added
in modulo-2 arithmetic to produce a third code word in the code.

Linear Block Codes
Systematic codes are codes in which the message bits are transmitted in an
unaltered form.
Example: Consider an (n,k) linear block code.

There are 2^k distinct message blocks, each mapped to one of 2^k code
words chosen from the 2^n possible n-bit sequences.

Let m0, m1, …, m(k-1) constitute a block of k message bits.

By applying this sequence of message bits to a linear block encoder, n-k
bits are added to the binary message.

Let b0, b1, …, b(n-k-1) constitute the block of n-k redundant (parity) bits.

This produces an n-bit code word.

Let c0, c1, …, c(n-1) constitute the n-bit code word.

Using vector representation, they can be written in row vector notation as

    (c0 c1 …. c(n-1)),  (m0 m1 …. m(k-1))  and  (b0 b1 …. b(n-k-1))
Linear Block Codes

Using matrix representation, we can define

    c, the 1-by-n code vector        = [c0 c1 …. c(n-1)]
    m, the 1-by-k message vector     = [m0 m1 …. m(k-1)]
    b, the 1-by-(n-k) parity vector  = [b0 b1 …. b(n-k-1)]

With a systematic structure, a code word is divided into two parts: one
part occupied by the message bits only, the other by the redundant
(parity) bits.

The (n-k) left-most bits of a code word are identical to the corresponding
parity bits.
The k right-most bits of a code word are identical to the corresponding
message bits.
Linear Block Codes

In matrix form, we can write the code vector c as a partitioned row vector
in terms of the vectors m and b:

    c = [b  m]

Given a message vector m, the corresponding code vector c for a systematic
linear (n,k) block code can be obtained by a matrix multiplication:

    c = m·G

where G is the k-by-n generator matrix.
Linear Block Codes

The generator matrix, G

G, the k-by-n generator matrix, has the general structure

    G = [P  Ik]

(so that c = m·G = [m·P  m] = [b  m]), where Ik is the k-by-k identity
matrix and P is the k-by-(n-k) coefficient matrix:

        | P0,0     P0,1     …..  P0,n-k-1   |          | 1 0 …. 0 |
    P = | P1,0     P1,1     …..  P1,n-k-1   |     Ik = | 0 1 …. 0 |
        | ….                                |          | ….       |
        | Pk-1,0   Pk-1,1   …..  Pk-1,n-k-1 |          | 0 0 …. 1 |

The identity matrix simply reproduces the message vector in the last k
elements of c.
The coefficient matrix generates the parity vector b via b = m·P.
The elements of P are chosen as part of the code design.
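A short numpy sketch of systematic encoding, c = m·G with G = [P Ik]; the P matrix below is derived here for the (7,4) Hamming code with g(X) = 1 + X + X^3 that appears in the cyclic-code example later in these notes:

```python
import numpy as np

# Coefficient matrix P for a (7,4) Hamming code, chosen to be consistent
# with the generator polynomial g(X) = 1 + X + X^3 used later in the notes.
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]])
I4 = np.eye(4, dtype=int)
G = np.hstack([P, I4])            # G = [P  Ik], so c = m.G = [b  m]

def encode(m):
    """Systematic encoding c = m.G, with all arithmetic taken modulo 2."""
    return np.mod(np.array(m) @ G, 2)

# Message 1001 -> parity 011 -> code word 0111001 (matches the later example).
print(encode([1, 0, 0, 1]))       # [0 1 1 1 0 0 1]
```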
Cyclic Codes

A subclass of linear codes having a cyclic structure.

The code vector can be expressed in the form

    c = (c(n-1) c(n-2) …… c1 c0)

A new code vector in the code can be produced by cyclically shifting
another code vector.
For example, a cyclic shift of all n bits one position to the left gives

    c' = (c(n-2) c(n-3) …… c1 c0 c(n-1))

A second shift produces another code vector, c'':

    c'' = (c(n-3) c(n-4) …… c1 c0 c(n-1) c(n-2))
Cyclic Codes

The cyclic property can be treated mathematically by associating a code
vector c with the code polynomial c(X):

    c(X) = c0 + c1X + c2X^2 + …… + c(n-1)X^(n-1)

The power of X denotes the position of the corresponding codeword bit.

The coefficients are either 0s or 1s.

An (n,k) cyclic code is defined by a generator polynomial g(X):

    g(X) = X^(n-k) + g(n-k-1)X^(n-k-1) + ……. + g1X + 1

The coefficients gi are such that g(X) is a factor of X^n + 1.
Cyclic Codes – Encoding Procedure

To encode an (n,k) cyclic code:

1. Multiply the message polynomial m(X) by X^(n-k).

2. Divide X^(n-k)·m(X) by the generator polynomial g(X) to obtain the
remainder polynomial b(X).

3. Add b(X) to X^(n-k)·m(X) to obtain the code polynomial c(X).
Cyclic Codes – Example

The (7,4) Hamming Code

For the message sequence 1001:

The message polynomial is m(X) = 1 + X^3.

1. Multiplying by X^(n-k) = X^3 gives X^3 + X^6.

2. Divide by the generator polynomial g(X), which must be a factor of
X^n + 1.

The (7,4) Hamming code is defined by a generator polynomial g(X) that is a
factor of X^7 + 1.

With n = 7, we can factorize X^7 + 1 into three irreducible polynomials:

    X^7 + 1 = (1 + X)(1 + X^2 + X^3)(1 + X + X^3)
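As a quick check of this factorization, a small GF(2) polynomial multiplication sketch (illustrative only; coefficient lists are indexed by the power of X):

```python
def poly_mul_gf2(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

# (1 + X)(1 + X^2 + X^3)(1 + X + X^3)
p = poly_mul_gf2(poly_mul_gf2([1, 1], [1, 0, 1, 1]), [1, 1, 0, 1])
print(p)  # [1, 0, 0, 0, 0, 0, 0, 1]  i.e. X^7 + 1
```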
Cyclic Codes – Example

If, for example, we choose the generator polynomial g(X) = 1 + X + X^3 and
perform the division, we get the remainder b(X) = X + X^2, i.e. the parity
bits 011.

3. Adding b(X) to X^3 + X^6 gives the code polynomial

    c(X) = X + X^2 + X^3 + X^6

So the code word for the message sequence 1001 is 0111001.

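A short Python sketch of the three-step encoding procedure (polynomial arithmetic over GF(2), with index i of a coefficient list holding the coefficient of X^i); it reproduces the 1001 -> 0111001 result above:

```python
def gf2_remainder(dividend, divisor):
    """Remainder of GF(2) polynomial division (coefficient lists, index = power)."""
    rem = list(dividend)
    for i in range(len(rem) - 1, len(divisor) - 2, -1):
        if rem[i]:                            # leading term present: XOR in shifted divisor
            shift = i - (len(divisor) - 1)
            for j, g in enumerate(divisor):
                rem[shift + j] ^= g
    return rem[:len(divisor) - 1]             # degree of remainder < degree of g(X)

def cyclic_encode(msg, gen, n):
    """Systematic (n,k) cyclic encoding: c(X) = b(X) + X^(n-k) m(X)."""
    k = len(msg)
    shifted = [0] * (n - k) + list(msg)       # X^(n-k) * m(X)
    b = gf2_remainder(shifted, gen)           # parity polynomial b(X)
    return b + list(msg)                      # code word (b0..b(n-k-1), m0..m(k-1))

g = [1, 1, 0, 1]                              # g(X) = 1 + X + X^3
print(cyclic_encode([1, 0, 0, 1], g, 7))      # [0, 1, 1, 1, 0, 0, 1] -> 0111001
```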
Cyclic Codes – Implementation

The cyclic code is implemented by a shift-register encoder with (n-k)
stages.

[Figure: (n-k)-stage shift-register encoder with stages r0, …, r(n-k-1), a
feedback switch and an output switch.]

Encoding starts with the feedback switch closed, the output switch in the
message-bit position, and the register initialized to the all-zero state.

The k message bits are shifted into the register and delivered to the
transmitter. After k shift cycles, the register contains the check bits b.

The feedback switch is now opened and the output switch is moved to the
check-bit position to deliver the check bits to the transmitter.
Cyclic Codes – Implementation Example

The shift-register encoder for the (7,4) Hamming code has 7 - 4 = 3 stages.

When the input message is 0011, the redundancy bits are available in the
register after 4 shift cycles.
Cyclic Codes – Implementation Exercise

The shift-register encoder for the (7,4) Hamming code has 7 - 4 = 3 stages.

When the input message is 1001, the redundancy bits are available in the
register after 4 shift cycles:

Input bit    Register contents (before -> after shift)
    1            0 0 0  ->  0 1 1
    0            0 1 1  ->  1 1 0
    0            1 1 0  ->  1 1 1
    1            1 1 1  ->  1 1 0

The check bits are 011.
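A minimal Python sketch of such a shift-register encoder; the register update rule used here is the standard one for a systematic cyclic encoder and is assumed rather than taken from the figure:

```python
def shift_register_encode(msg_bits, g):
    """Check bits of a systematic cyclic code via an (n-k)-stage shift register.

    msg_bits is the message (m0, ..., m(k-1)); g holds the generator
    coefficients (g0, ..., g(n-k)) with g0 = g(n-k) = 1. Bits are fed in from
    the highest-order message coefficient down to m0, one per shift cycle.
    """
    r = [0] * (len(g) - 1)                    # n-k register stages, initially zero
    for bit in reversed(msg_bits):            # feedback switch closed
        feedback = bit ^ r[-1]
        for i in range(len(r) - 1, 0, -1):
            r[i] = r[i - 1] ^ (g[i] & feedback)
        r[0] = feedback
    return r                                  # check bits (b0, ..., b(n-k-1))

# (7,4) Hamming code, g(X) = 1 + X + X^3: message 1001 -> check bits 011.
print(shift_register_encode([1, 0, 0, 1], [1, 1, 0, 1]))  # [0, 1, 1]
```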

