Ranjan Bose Information Theory Coding and Cryptography Solution Manual PDF
"Copyrighted Material" - "Additional resource material supplied with the book Information Theory, Coding and Cryptography, Second Edition written by Ranjan Bose & published by McGraw-Hill Education (India) Pvt. Ltd. This resource material is for Instructor's use only."
Q.1.1 DMS with source probabilities: {0.30, 0.25, 0.20, 0.15, 0.10}

Entropy H(X) = Σ_i p_i log(1/p_i)
             = 0.30 log(1/0.30) + 0.25 log(1/0.25) + ...
             = 2.228 bits
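The entropy sum above can be checked numerically (a small Python sketch; the helper name is illustrative):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: H(X) = sum of p_i * log2(1/p_i)."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

H = entropy([0.30, 0.25, 0.20, 0.15, 0.10])
print(round(H, 3))  # 2.228
```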
qi
Q.1.2 Define D(p||q) = Σ_i p_i log(p_i/q_i)                                    (1)

−D(p||q) = Σ_i p_i log(q_i/p_i) ≤ Σ_i p_i (q_i/p_i − 1)    [using the identity ln x ≤ x − 1]
         = Σ_i (q_i − p_i) = 0
∴ D(p||q) ≥ 0

Taking q to be the uniform distribution, q_i = 1/n:
D(p||q) = Σ_i p_i log p_i + log n = −H(X) + log n ≥ 0
⇒ H(X) ≤ log n
H(X) = log n for the uniform probability distribution. Hence it is proved that the entropy of a
discrete source is maximum when the output symbols are equally probable. The
quantity D(p||q) is called the Kullback-Leibler distance.
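Both facts (D(p||q) ≥ 0, and H(X) ≤ log n via the uniform q) can be checked numerically for a random distribution (Python sketch; function names are illustrative):

```python
import math, random

def kl(p, q):
    """Kullback-Leibler distance D(p||q) = sum of p_i * log2(p_i/q_i)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def entropy(p):
    return sum(-pi * math.log2(pi) for pi in p if pi > 0)

random.seed(1)
n = 5
w = [random.random() for _ in range(n)]
p = [x / sum(w) for x in w]      # random distribution
u = [1.0 / n] * n                # uniform distribution

d = kl(p, u)
# D(p||u) = log n - H(p) >= 0, hence H(p) <= log n
assert d >= 0
assert abs(d - (math.log2(n) - entropy(p))) < 1e-9
```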
[Figure: plot of y = ln(x) and y = x − 1 over 0 ≤ x ≤ 3.5, illustrating the identity ln x ≤ x − 1]
Q 1.4 Consider two probability distributions {p_0, p_1, ..., p_(K−1)} and {q_0, q_1, ..., q_(K−1)}.

We have Σ_(k=0)^(K−1) p_k log2(q_k/p_k) = (1/ln 2) Σ_(k=0)^(K−1) p_k ln(q_k/p_k). Use ln x ≤ x − 1:

Σ_(k=0)^(K−1) p_k log2(q_k/p_k) ≤ (1/ln 2) Σ_(k=0)^(K−1) p_k (q_k/p_k − 1)
                                = (1/ln 2) Σ_(k=0)^(K−1) (q_k − p_k)
                                = (1/ln 2) (Σ_k q_k − Σ_k p_k) = 0

Thus, Σ_(k=0)^(K−1) p_k log2(q_k/p_k) ≤ 0.                                     (1)

Now, I(X; Y) = Σ_(i=1)^n Σ_(j=1)^m P(x_i, y_j) log [P(x_i, y_j) / (P(x_i)P(y_j))]    (2)

From (1) and (2) we can conclude (after basic manipulations) that I(X; Y) ≥ 0. The
equality holds if and only if P(x_i, y_j) = P(x_i)P(y_j), i.e., when the input and output
symbols of the channel are statistically independent.
Q.1.5 Source X has an infinitely large set of outputs with P(x_i) = 2^(−i), i = 1, 2, 3, ...

H(X) = Σ_(i=1)^∞ p(x_i) log(1/p(x_i)) = −Σ_(i=1)^∞ 2^(−i) log 2^(−i)
     = Σ_(i=1)^∞ i·2^(−i)
     = 2 bits
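The series Σ i·2^(−i) = 2 can be checked by partial summation (Python sketch):

```python
# Partial sum of H(X) = sum over i >= 1 of i * 2^(-i); the series converges to 2 bits.
H = sum(i * 2.0**-i for i in range(1, 60))
print(H)
```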
H(X) = −Σ_i p(1 − p)^(i−1) {log p + (i − 1) log(1 − p)}
     = −log p · Σ_i p(1 − p)^(i−1) − p log(1 − p) · Σ_i (i − 1)(1 − p)^(i−1)
     = −log p × 1 − p log(1 − p) × (1 − p)/p²
     = −log p − ((1 − p)/p) log(1 − p)
     = [−p log p − (1 − p) log(1 − p)]/p = (1/p) H(p) bits
Q 1.8 Yes, it is a uniquely decodable code because each symbol is coded uniquely.
Q 1.9 The relative entropy or Kullback-Leibler distance between two probability mass
functions p(x) and q(x) is defined as
    D(p||q) = Σ_(x∈X) p(x) log [p(x)/q(x)].                                    (1.76)

(i) Show that D(p||q) is non-negative.

Solution: −D(p||q) = −Σ_(x∈X) p(x) log [p(x)/q(x)] = Σ_(x∈X) p(x) log [q(x)/p(x)]
                  ≤ log Σ_(x∈X) p(x) · q(x)/p(x)    (from Jensen's inequality: E[f(X)] ≤ f(E[X]) for concave f)
                  = log Σ_(x∈X) q(x) = log(1) = 0.
Thus, −D(p||q) ≤ 0, or D(p||q) ≥ 0.
(ii) D(p||q) = Σ p(x) log [p(x)/q(x)]

Symmetry property: does D(p||q) = D(q||p)?
This would require
    Σ p(x) log [p(x)/q(x)] = Σ q(x) log [q(x)/p(x)],
i.e.
    Σ p(x) log p(x) − Σ p(x) log q(x) = Σ q(x) log q(x) − Σ q(x) log p(x),
which does not hold in general.
Therefore the Kullback-Leibler distance does not satisfy the symmetry property.

Triangle inequality: does D(p||q) + D(q||r) ≥ D(p||r)?
This would require
    Σ p(x) log [p(x)/q(x)] + Σ q(x) log [q(x)/r(x)] ≥ Σ p(x) log [p(x)/r(x)].
On simplifying, this reduces to
    Σ (q(x) − p(x)) log [q(x)/r(x)] ≥ 0,
which does not hold for all distributions p, q and r.
Therefore the Kullback-Leibler distance does not satisfy the triangle inequality.
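A small numerical example confirms both failures (Python sketch; the particular distributions are illustrative counterexamples, not from the text):

```python
import math

def kl(p, q):
    """D(p||q) = sum of p_i * log2(p_i/q_i), in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.9, 0.1]
q = [0.5, 0.5]
r = [0.1, 0.9]

# Not symmetric:
assert abs(kl(p, q) - kl(q, p)) > 1e-6
# Triangle inequality fails for this choice of p, q, r:
assert kl(p, q) + kl(q, r) < kl(p, r)
```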
(iii) I(X; Y) = Σ p(x, y) I(x; y)
             = Σ p(x, y) log [p(x, y) / (p(x)p(y))]
             = D(p(x, y) || p(x)p(y))
Q.1.10
Q.1.11 The codeword lengths are possible if and only if they satisfy the Kraft-McMillan
inequality, which in this case is
    1/2³ + 1/2³ + 1/2³ + d/2⁸ ≤ 1
    d/256 ≤ 5/8
    d ≤ 160
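The bound can be confirmed with a direct search (Python sketch):

```python
# Kraft-McMillan: three codewords of length 3 plus d codewords of length 8
# are realizable iff 3/2**3 + d/2**8 <= 1. Find the largest such d.
d = 0
while 3 / 2**3 + (d + 1) / 2**8 <= 1:
    d += 1
print(d)  # 160
```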
Q. 1.12 First note that it is a discrete random variable with a valid probability
distribution, since Σ_n P_n = Σ_(n=2)^∞ 1/(A n log² n) = 1.
Q.1.13 p(x) = 1/a for 0 ≤ x ≤ a, and 0 otherwise.

Differential entropy = −∫₀ᵃ p(x) log p(x) dx = −∫₀ᵃ (1/a) log(1/a) dx = log₂ a

The plot is given below. Note that the differential entropy can be negative (log₂ a < 0 for a < 1).

[Figure: plot of H(X) = log₂ a versus a, for 0 < a ≤ 10]
Q.1.14 DMS with source probabilities {0.35, 0.25, 0.20, 0.15, 0.05}
(ii) R = Σ_i p_i l_i = 0.35 × 2 + 0.25 × 2 + 0.20 × 2 + 0.15 × 3 + 0.05 × 3 = 2.2 bits.

(iii) η = H(X)/R
H(X) = Σ_i p_i log(1/p_i) = 2.121
η = 2.121/2.2 = 0.964 = 96.4%.
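The efficiency computation can be reproduced as follows (Python sketch):

```python
import math

probs = [0.35, 0.25, 0.20, 0.15, 0.05]
lengths = [2, 2, 2, 3, 3]            # codeword lengths from part (i)

H = sum(p * math.log2(1 / p) for p in probs)     # source entropy, bits
R = sum(p * l for p, l in zip(probs, lengths))   # average code length, bits
eta = H / R                                      # code efficiency
print(round(R, 2), round(H, 3), round(eta, 3))
```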
Q.1.15 (i) [Figure: ternary Huffman code tree]
The resulting codes are:
    S1 (0.35) → 2
    S2 (0.25) → 3
    S3 (0.20) → 11
    S4 (0.15) → 12
    S5 (0.05) → 13

(ii) R = Σ_i p_i l_i = 0.35 + 0.25 + 0.20 × 2 + 0.15 × 2 + 0.05 × 2 = 1.40 ternary
digits/symbol
Average code length = R = Σ_(k=1)^L n(x_k) P(x_k) = 3

R = Σ_(k=1)^L n(x_k) P(x_k) = 2.9 bits.

(iii) The entropy H(X) = −Σ_(i=1)^n P(x_i) log P(x_i) = 2.8464 bits
(ii)
Encoded stream
00000 00001 00010 00101 01001 00111 00011 00110 01100 00100
10101 10100 01000 00110
Q.1.22 (i) For a run length code we encode a run in terms of [how many] and [what].
Therefore, to encode a run of n bits we require ⌈log₂ n⌉ + 1 bits.

[Figure: run length code tree built from the run probabilities 1/2, 1/2², 1/2³, 1/2⁴, ..., (n−1)/2^(n−1), n/2^(n−1)]
Q.1.23 For example, let us assume a run of 2¹⁴ 1's. For the first level of run length coding we
require 14 + 1 = 15 bits. For the second level of run length coding we need 4 + 1 = 5 bits. This
multi-layer run length coding is useful when we have large runs.
To encode a run of n 1's, the maximum possible compression can be calculated as:
For the first level, required bits = ⌈log₂ n⌉ + 1
For the second level, required bits = ⌈log₂(⌈log₂ n⌉ + 1)⌉ + 1
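The two-level bit counts above can be sketched as (Python; `run_length_bits` is an illustrative helper name):

```python
import math

def run_length_bits(n):
    """Bits to encode a run of length n: ceil(log2 n) for [how many] + 1 for [what]."""
    return math.ceil(math.log2(n)) + 1

n = 2**14
first = run_length_bits(n)        # 14 + 1 = 15 bits at the first level
second = run_length_bits(first)   # ceil(log2 15) + 1 = 5 bits at the second level
print(first, second)  # 15 5
```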
p(x = 0|y = 0) = p(y = 0|x = 0) p(x = 0) / p(y = 0) = (1 − p) p₀ / [(1 − p) p₀ + q(1 − p₀)]

p(x = 1|y = 1) = p(y = 1|x = 1) p(x = 1) / p(y = 1) = (1 − q)(1 − p₀) / [p p₀ + (1 − q)(1 − p₀)]
Q 2.2
[Figure: binary channel with erasure output e. x = 0 → y = 0 with probability 1 − p, → e with probability p; x = 1 → y = 1 with probability 1 − q, → e with probability q. P(x = 0) = p₀, P(x = 1) = 1 − p₀]

p(y = 0) = (1 − p) p₀
p(y = e) = p p₀ + q(1 − p₀)
p(y = 1) = (1 − q)(1 − p₀)

I(x; y) = Σ_(i=1)^2 Σ_(j=1)^3 P(y_j|x_i) P(x_i) log [P(y_j|x_i)/P(y_j)]

= (1 − p) p₀ log(1/p₀) + p p₀ log [p/(p p₀ + q(1 − p₀))]
+ (1 − q)(1 − p₀) log [1/(1 − p₀)] + q(1 − p₀) log [q/(p p₀ + q(1 − p₀))]
Hint: For capacity, find the value of p₀ from dI(X; Y)/dp₀ = 0 and then substitute it into the above equation.
(a) I(x, y) = (2/3) p log(1/p) + (1/3) p log(1/p) + (1 − p) log [1/(1 − p)]
            = p log(1/p) + (1 − p) log [1/(1 − p)] = −p log p − (1 − p) log(1 − p)

dI(x, y)/dp = −log p − 1 + log(1 − p) + 1 = 0
⇒ log p = log(1 − p) ⇒ p = 1/2
∴ CA = ½ log 2 + ½ log 2 = log 2 = 1 bit/use.
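The maximization can be reproduced numerically with a coarse grid search (Python sketch):

```python
import math

def binary_entropy(p):
    """I(x, y) = -p*log2(p) - (1-p)*log2(1-p), in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# The maximum of 1 bit is attained at p = 1/2.
best_p = max((i / 1000 for i in range(1, 1000)), key=binary_entropy)
print(best_p, binary_entropy(best_p))  # 0.5 1.0
```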
(b) I(x; y) = p₂ log(1/p₂) + p₁ log [1/(1 − p₂)] + (1 − p₁ − p₂) log [1/(1 − p₂)]
            = p₂ log(1/p₂) + (1 − p₂) log [1/(1 − p₂)]
(c)
H(Z) = −(p/3) log(p/3) − (1 − p/3) log(1 − p/3)
H(Z|X) = −(p/3) log(1/3) − (2p/3) log(2/3)

I(X; Z) = H(Z) − H(Z|X)
        = (p/3) log(1/p) + (2p/3) log [2/(3 − p)] + (1 − p) log [3/(3 − p)]

dI(X; Z)/dp = −(1/3) log p + (2/3) log 2 + (1/3) log(3 − p) − log 3 = 0
⇒ (2/3) − log 3 + (1/3) log [(3 − p)/p] = 0
⇒ 2 − 3 log 3 + log [(3 − p)/p] = 0
⇒ log [(3 − p)/p] = 3 log 3 − 2 = 2.755
⇒ (3 − p)/p = 6.75 ⇒ p = 0.387.
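The closing steps can be confirmed numerically (Python sketch; note that 2^(3 log₂ 3 − 2) = 27/4 = 6.75 exactly):

```python
import math

# Solve log2((3 - p)/p) = 3*log2(3) - 2 for p.
rhs = 3 * math.log2(3) - 2      # = 2.755
ratio = 2**rhs                  # (3 - p)/p = 6.75
p = 3 / (ratio + 1)
print(round(rhs, 3), round(ratio, 2), round(p, 3))  # 2.755 6.75 0.387
```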
(d) CAB is less than both CA and CB. This is because we don’t have the flexibility to
choose the input symbol probabilities for the second half of the composite channel. The
plot of I(X; Z) versus probability p is given below.
[Figure: plot of I(X; Z) versus probability p, for 0 ≤ p ≤ 1]
Q.2.4
C = max over {p_i} of I(x; y)

I(x; y) = Σ_j Σ_i p(x_j) p(y_i|x_j) log [p(y_i|x_j)/p(y_i)]

= 0.5 p₁ log [0.5/(0.5(1 − p₂ − p₃))] + 0.5(1 − p₁ − p₂ − p₃) log [0.5/(0.5(1 − p₂ − p₃))]
+ 0.5 p₁ log [0.5/(0.5(p₁ + p₂))] + 0.5 p₂ log [0.5/(0.5(p₁ + p₂))]
+ 0.5 p₂ log [0.5/(0.5(p₂ + p₃))] + 0.5 p₃ log [0.5/(0.5(p₂ + p₃))]
+ 0.5 p₃ log [0.5/(0.5(1 − p₁ − p₂))] + 0.5(1 − p₁ − p₂ − p₃) log [0.5/(0.5(1 − p₁ − p₂))]

= −0.5 [p₁ log(1 − p₂ − p₃) + (1 − p₁ − p₂ − p₃) log(1 − p₂ − p₃) + p₁ log(p₁ + p₂)
+ p₂ log(p₁ + p₂) + p₂ log(p₂ + p₃) + p₃ log(p₂ + p₃) + p₃ log(1 − p₁ − p₂)
+ (1 − p₁ − p₂ − p₃) log(1 − p₁ − p₂)]

= −0.5 [(1 − p₂ − p₃) log(1 − p₂ − p₃) + (p₁ + p₂) log(p₁ + p₂)
+ (p₂ + p₃) log(p₂ + p₃) + (1 − p₁ − p₂) log(1 − p₁ − p₂)]

∂I(x; y)/∂p₁ = −0.5 [log(p₁ + p₂) + 1 − log(1 − p₁ − p₂) − 1] = 0
⇒ log(p₁ + p₂) = log(1 − p₁ − p₂) ⇒ p₁ + p₂ = 1/2                       (1)

∂I(x; y)/∂p₃ = −0.5 [log(p₂ + p₃) + 1 − log(1 − p₂ − p₃) − 1] = 0
⇒ log(p₂ + p₃) = log(1 − p₂ − p₃) ⇒ p₂ + p₃ = 1/2                       (2)

Substituting (1) & (2) in I(x; y) we get
C = −0.5 [0.5 log 0.5 + 0.5 log 0.5 + 0.5 log 0.5 + 0.5 log 0.5]
  = −(4/2) × ½ log ½ = log 2 = 1 bit/use.
[Figure: plot of capacity versus SNR (dB), for SNR from 15 to 40 dB]
Q.2.6
Capacity = max over {p_i} of I(x; y)

I(x; y) = p₁(1 − p) log [(1 − p)/((1 − p)p₁ + p(1 − p₁ − p₂))] + p(1 − p₁ − p₂) log [p/((1 − p)p₁ + p(1 − p₁ − p₂))]
        + p₂(1 − p) log [(1 − p)/(p p₁ + (1 − p)p₂)] + p p₁ log [p/(p p₁ + (1 − p)p₂)]
        + (1 − p₁ − p₂)(1 − p) log [(1 − p)/(p p₂ + (1 − p)(1 − p₁ − p₂))] + p p₂ log [p/(p p₂ + (1 − p)(1 − p₁ − p₂))]

∂I(x; y)/∂p₁ = 0 ⇒ (2p − 1) log [(1 − p)p₁ + p(1 − p₁ − p₂)] − p log [p p₁ + (1 − p)p₂]
                   + (1 − p) log [p p₂ + (1 − p)(1 − p₁ − p₂)] = 0

∂I(x; y)/∂p₂ = 0 ⇒ p log [(1 − p)p₁ + p(1 − p₁ − p₂)] − (1 − p) log [p p₁ + (1 − p)p₂]
                   − (2p − 1) log [p p₂ + (1 − p)(1 − p₁ − p₂)] = 0
Bandwidth required = (96 × 10⁶) / log₂(1 + 316.22) = 11.55 × 10⁶ Hz
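The bandwidth figure follows from the Shannon capacity formula C = B log₂(1 + SNR); a quick check (Python sketch; 316.22 corresponds to 25 dB SNR):

```python
import math

C = 96e6          # required rate, bits per second
snr = 316.22      # linear SNR (25 dB)
B = C / math.log2(1 + snr)
print(round(B / 1e6, 2))  # 11.55 (MHz)
```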
Q.2.8
(a) I(x; y) = p₁ log [1/(p₁ + p(1 − p₁))] + p(1 − p₁) log [p/(p₁ + p(1 − p₁))]
            + (1 − p)(1 − p₁) log [(1 − p)/((1 − p)(1 − p₁))]

dI(x; y)/dp₁ = −(1 − p) log [p₁ + p(1 − p₁)] − (1 − p) − p log p + (1 − p)[1 + log(1 − p₁)]
             = (1 − p) log(1 − p₁) − (1 − p) log [p₁ + p(1 − p₁)] − p log p = 0

⇒ log [(1 − p₁)/(p₁ + p(1 − p₁))] = [p/(1 − p)] log p

⇒ (1 − p₁)/(p₁ + p(1 − p₁)) = p^(p/(1 − p)) = α

⇒ p₁ = (1 − αp)/(1 + α − αp) → input probability that results in capacity
(b) p₁ = 1 × 1 × ... (N times) = 1
p₂ = p + (1 − p)p + (1 − p)²p + ... + (1 − p)^(N−1) p
   = p · [1 − (1 − p)^N] / [1 − (1 − p)] = 1 − (1 − p)^N
p₃ = (1 − p)^N
(ii) Pe < 2^(−n(R₀ − Rc))
Assuming binary antipodal signaling:
10⁻⁶ < 2^(−(0.99 − Rc))
Rc < 18.94
Q.2.10 Consider D(p||q) ≜ ∫_(−∞)^(∞) p(x) log [p(x)/q(x)] dx,
where p(x) and q(x) are two probability distributions.

−D(p||q) = ∫ p(x) log [q(x)/p(x)] dx ≤ ∫ p(x) [q(x)/p(x) − 1] dx    [using ln x ≤ x − 1]
         = ∫ q(x) dx − ∫ p(x) dx = 0
⇒ D(p||q) ≥ 0

Now, let q(x) be the Gaussian distribution: q(x) = (1/√(2πσ²)) e^(−x²/(2σ²)),
and let p(x) be an arbitrary distribution with variance σ².
D(p||q) ≜ ∫ p(x) log [p(x)/q(x)] dx
        = ∫ p(x) log₂ [1/q(x)] dx + ∫ p(x) log₂ p(x) dx
        = ∫ p(x) log₂ [√(2πσ²) e^(x²/(2σ²))] dx − H(X)
        = ½ log₂(2πσ²) + ½ log₂ e − H(X)
        = ½ log₂(2πeσ²) − H(X)

D(p||q) ≥ 0 ⇒ H(X) ≤ ½ log₂(2πeσ²),
where ½ log₂(2πeσ²) is the differential entropy of the Gaussian distribution. Hence, among
all distributions with variance σ², the Gaussian maximizes the differential entropy.
t = ⌊(d − 1)/2⌋ = ⌊(6 − 1)/2⌋ = 2

By the Hamming bound,
    1 + C(n, 1) + C(n, 2) + ... + C(n, t) ≤ 2^(n−k)
⇒ 1 + C(6, 1) + C(6, 2) ≤ 2⁵
⇒ 22 < 32.
Hence (6, 1, 6) is a valid code and it is simply the repetition code with n = 6.
(ii) (3, 3, 1) ⇒ d* = 1 ⇒ t = 0.
By the Hamming bound, 1 ≤ 2⁰ = 1.
(iii) (4, 3, 2) ⇒ d* = 2 ⇒ t = 0.
⎡1 0 1 0 0⎤
Q3.3 G = ⎢⎢ 1 0 0 1 1 ⎥⎥
⎢⎣ 0 1 0 1 0 ⎥⎦ 3 x 5
⇒ k = 3, n = 5.
C = I·G, where the rows of I are the 2³ = 8 message vectors:

    ⎡0 0 0⎤                     ⎡0 0 0 0 0⎤
    ⎢0 0 1⎥                     ⎢0 1 0 1 0⎥
    ⎢0 1 0⎥                     ⎢1 0 0 1 1⎥
    ⎢0 1 1⎥  ⎡1 0 1 0 0⎤       ⎢1 1 0 0 1⎥
C = ⎢1 0 0⎥ ·⎢1 0 0 1 1⎥   =   ⎢1 0 1 0 0⎥
    ⎢1 0 1⎥  ⎣0 1 0 1 0⎦       ⎢1 1 1 1 0⎥
    ⎢1 1 0⎥                     ⎢0 0 1 1 1⎥
    ⎣1 1 1⎦                     ⎣0 1 1 0 1⎦
⎡1 1 0 1 0 ⎤
H= ⎢ ⎥
⎣0 1 1 0 1 ⎦
⎡1 0 ⎤
⎢1 1 ⎥
⎢ ⎥
HT = ⎢0 1 ⎥
⎢ ⎥
⎢1 0 ⎥
⎢⎣ 0 1 ⎥⎦
               ⎡0 0 0 0 0⎤          ⎡0 0⎤
               ⎢0 0 0 0 1⎥  ⎡1 0⎤   ⎢0 1⎥
               ⎢0 0 0 1 0⎥  ⎢1 1⎥   ⎢1 0⎥
Syndrome = eHᵀ=⎢0 0 1 0 0⎥ ·⎢0 1⎥ = ⎢0 1⎥
               ⎢0 1 0 0 0⎥  ⎢1 0⎥   ⎢1 1⎥
               ⎣1 0 0 0 0⎦  ⎣0 1⎦   ⎣1 0⎦
00000 → 00
00100 → 01
01000 → 11
10000 → 10
Standard Array coset leaders:
00001
00010
00100
01000
10000

(viii) Number of errors it can correct = ⌊(d* − 1)/2⌋ = 0
(ix) Since it cannot correct any error, the symbol error probability is the same as the
uncoded probability of error. However, since a 1-bit error can be detected, a request
for repeat transmission can be made on this basis. This scheme will also
fail if 2 or more bits are in error. Thus, if automatic repeat request (ARQ)
is used, the probability of error will be:
(x) Since all zero is a valid codeword and sum of any two code words is also a
valid codeword, it is a linear code.
⎡1 0 1 0 1⎤
G= ⎢ ⎥
⎣0 1 0 1 0⎦
Q3.5 Given: (n, k, d*) is a binary code with d* even. We need to show that there exists
an (n, k, d*) code in which all codewords have even weight.
Construct such a code by appending an even parity bit to all the codewords of the
given binary (n, k, d*) code. All codewords then have even weight.
Since d* is even, this process of adding a parity bit does not alter the minimum
weight. Hence d* remains the minimum distance of the new code.
t = ⌊(d* − 1)/2⌋ = 3 errors can be corrected.
p = 0.01.

P(e) = C(23, 4) p⁴(1 − p)¹⁹ + C(23, 5) p⁵(1 − p)¹⁸ + ... + C(23, 23) p²³
     = 1 − C(23, 0)(1 − p)²³ − C(23, 1) p(1 − p)²² − C(23, 2) p²(1 − p)²¹ − C(23, 3) p³(1 − p)²⁰
     = 1 − 0.99992 ≈ 0.00008.
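This error probability can be evaluated directly (Python sketch):

```python
from math import comb

p = 0.01
n, t = 23, 3
# Probability that more than t errors occur in n bits (binomial tail):
P_ok = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))
P_e = 1 - P_ok
print(round(P_e, 5))  # 0.00008
```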
C → H ⇒ cHᵀ = 0 ⇒ GHᵀ = 0
C₁ = [C : p]  (each codeword extended by an overall parity bit p)
G₁ = [G : p]

G₁H₁ᵀ = [G : p] H₁ᵀ

               ⎡           | 1⎤
               ⎢    Hᵀ     | :⎥
     = [G : p]·⎢           | 1⎥
               ⎢___________|__⎥
               ⎣0 0 .... 0 | 1⎦

     = [GHᵀ   G·1ᵀ ⊕ p]
     = 0.

Hence H₁ = ⎡    H      0⎤
           ⎣1 1 ... 1  1⎦
is the parity check matrix of the extended code C₁, obtained from C by adding an
overall parity bit.
+ | 0 1 2 3        · | 0 1 2 3
0 | 0 1 2 3        0 | 0 0 0 0
1 | 1 0 3 2        1 | 0 1 2 3
2 | 2 3 0 1        2 | 0 2 3 1
3 | 3 2 1 0        3 | 0 3 1 2
G = ⎡1 0 0 1 1⎤
    ⎢0 1 0 1 2⎥
    ⎣0 0 1 1 3⎦ = [I | P]
As t = 1
r=0
M {C(n, 0) + C(n, 1)(q − 1) + C(n, 2)(q − 1)² + ... + C(n, t)(q − 1)ᵗ} = qⁿ

For binary codes with t = 3:
[1 + C(n, 1) + C(n, 2) + C(n, 3)] = 2^(n−k)

If n = 7:  1 + 7 + 21 + 35 = 2^(7−k)
           64 = 2^(7−k)
which can be satisfied for k = 1.
∴ a binary perfect code with d* = 7 & n = 7 is possible.

If n = 23: 1 + 23 + 253 + 1771 = 2^(23−k)
           2048 = 2^(23−k)
⇒ k = 12.
∴ a binary perfect code with d* = 7 & n = 23 is possible.
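Both Hamming-bound sums can be verified directly (Python sketch):

```python
from math import comb

def hamming_sum(n, t):
    """Number of words in a radius-t Hamming ball: sum of C(n, i), i = 0..t."""
    return sum(comb(n, i) for i in range(t + 1))

# d* = 7 => t = 3; a perfect binary code needs the ball size to be a power of 2.
assert hamming_sum(7, 3) == 64       # 2**6  => (7, 1) repetition code
assert hamming_sum(23, 3) == 2048    # 2**11 => (23, 12) Golay code
print(hamming_sum(7, 3), hamming_sum(23, 3))  # 64 2048
```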
Q3.14 r_H = k/n = (2^m − 1 − m)/(2^m − 1)

lim_(k→∞) r_H = lim_(m→∞) (2^m − 1 − m)/(2^m − 1)

Since the 2^m terms go toward ∞ exponentially, we can neglect the effect of
both m and 1:

⇒ lim_(k→∞) r_H = lim_(m→∞) 2^m/2^m = 1.
Q3.15 Hint: Check for the optimality of codes with a larger value of n, but same k and t,
or with a smaller value of k with the same n and t. Then draw a conclusion.
Q.3.16 For orthogonality, every column must be orthogonal to every other column, i.e., the inner
product of any 2 column vectors must be 0.
Q4.2 (a) F[x]/(x² + 1) over GF(2) contains {0, 1, x, 1 + x}.
The addition and multiplication tables are given below:

  +   | 0    1    x    1+x        ·   | 0  1    x    1+x
  0   | 0    1    x    1+x        0   | 0  0    0    0
  1   | 1    0    x+1  x          1   | 0  1    x    x+1
  x   | x    x+1  0    1          x   | 0  x    1    x+1
  1+x | 1+x  x    1    0          1+x | 0  x+1  x+1  0

F[x]/(x² + 1) over GF(2) is not a field because of the absence of a multiplicative
inverse for 1 + x.
Equivalently, F[x]/(x² + 1) over GF(2) is not a field because (x² + 1) is not a prime
polynomial over GF(2).

(b) F[x]/(x² + 1) over GF(3) is a field because (x² + 1) is prime over GF(3).
F[x]/(x² + 1) over GF(3) contains {0, 1, 2, x, x+1, x+2, 2x, 2x+1, 2x+2}.
This is a field because it satisfies all the eight conditions stated in Definition 3.9.
Systematic form of G.
⎡1 0 0 0 1 ⎤
⎢0 1 0 0 1 ⎥
G= ⎢ ⎥ H = [1 1 1 1 1]
⎢0 0 1 0 1 ⎥
⎢ ⎥
⎣0 0 0 1 1 ⎦
d* = 2.
⎡1 0 1 0 0 1 1 0 1 1 1 0 0 0 0 ⎤
⎢0 1 0 1 0 0 1 1 0 1 1 1 0 0 0 ⎥⎥
⎢
(a) G = ⎢ 0 0 1 0 1 0 0 1 1 0 1 1 1 0 0 ⎥
⎢ ⎥
⎢0 0 0 1 0 1 0 0 1 1 0 1 1 1 0 ⎥
⎢⎣ 0 0 0 0 1 0 1 0 0 1 1 0 1 1 1 ⎥⎦
Minimum distance, d* = 7.
(b) See at the end of the solution for this question.
⎡1 0 0 0 0 1 0 1 0 0 1 1 0 1 1⎤
⎢0 1 0 0 0 1 1 1 1 0 1 0 1 1 0 ⎥⎥
⎢
G = ⎢0 0 1 0 0 0 1 1 1 1 0 1 0 1 1⎥
⎢ ⎥
⎢0 0 0 1 0 1 0 0 1 1 0 1 1 1 0⎥
⎢⎣ 0 0 0 0 1 0 1 0 0 1 1 0 1 1 1 ⎥⎦
(systematic form: G = [I | P])
(b) H = [−Pᵀ | I_(n−k)]
⎡ 1 1 0 1 0 1 0 0 0 0 0 0 0 0 0 ⎤
⎢ 0 1 1 0 1 0 1 0 0 0 0 0 0 0 0 ⎥⎥
⎢
⎢ 1 1 1 0 0 0 0 1 0 0 0 0 0 0 0 ⎥
⎢ ⎥
⎢ 0 1 1 1 0 0 0 0 1 0 0 0 0 0 0 ⎥
⎢ 0 0 1 1 1 0 0 0 0 1 0 0 0 0 0 ⎥
= ⎢ ⎥
⎢ 1 1 0 0 1 0 0 0 0 0 1 0 0 0 0 ⎥
⎢ 1 0 1 1 0 0 0 0 0 0 0 1 0 0 0 ⎥
⎢ ⎥
⎢ 0 1 0 1 1 0 0 0 0 0 0 0 1 0 0 ⎥
⎢ 1 1 1 1 1 0 0 0 0 0 0 0 0 1 0 ⎥
⎢ ⎥
⎣⎢ 1 0 1 0 1 0 0 0 0 0 0 0 0 0 1 ⎦⎥
n – k = 6, n = 15, ⇒ k = 9
No remainder.
⎡ 1 3 3 0 2 1 0 1 2 1 0 0 0 0 0 ⎤
⎢ 0 1 3 3 0 2 1 0 1 2 1 0 0 0 0 ⎥⎥
⎢
⎢ 0 0 1 3 3 0 2 1 0 1 2 1 0 0 0 ⎥
H = ⎢ ⎥
⎢ 0 0 0 1 3 3 0 2 1 0 1 2 1 0 0 ⎥
⎢ 0 0 0 0 1 3 3 0 2 1 0 1 2 1 0 ⎥
⎢ ⎥
⎣⎢ 0 0 0 0 0 1 3 3 0 2 1 0 1 2 1 ⎦⎥
V = [0 0 0 0 0 0 1 0 0 1 3 1 0 3 1]
⎡1 0 0 0 0 0 ⎤
⎢3 1 0 0 0 0 ⎥⎥
⎢
⎢3 3 1 0 0 0 ⎥
⎢ ⎥
⎢0 3 3 1 0 0 ⎥
⎢2 0 3 3 1 0 ⎥
⎢ ⎥
⎢1 2 0 3 3 1 ⎥
⎢0 1 2 0 3 3 ⎥
⎢ ⎥
HT = ⎢1 0 1 2 0 3 ⎥
⎢2 1 0 1 2 0 ⎥
⎢ ⎥
⎢1 2 1 0 1 2 ⎥
⎢ ⎥
⎢0 1 2 1 0 1 ⎥
⎢0 0 1 2 1 0 ⎥
⎢ ⎥
⎢0 0 0 1 2 1 ⎥
⎢0 0 0 0 1 2 ⎥
⎢ ⎥
⎢⎣ 0 0 0 0 0 1 ⎥⎦
⎡1 1 1 0 1 0 0 ⎤
G = ⎢⎢ 0 1 1 1 0 1 0 ⎥⎥
⎣⎢ 0 0 1 1 1 0 1 ⎥⎦
⎡1 0 0 1 1 0 1 ⎤
= ⎢⎢ 0 1 0 1 1 1 0 ⎥⎥
⎢⎣ 0 0 1 0 1 1 1 ⎥⎦
⎡1 1 0 1 0 0 0 ⎤
⎢1 1 1 0 1 0 0 ⎥⎥
H1 = ⎢ d min = 4
⎢0 1 1 0 0 1 0 ⎥
⎢ ⎥
⎣1 0 1 0 0 0 1 ⎦
t = ⌊(4 − 1)/2⌋ = 1
S = eHT
⎡0 0 0 0 0 0 0⎤                      ⎡0 0 0 0⎤
⎢0 0 0 0 0 0 1⎥                      ⎢0 0 0 1⎥
⎢0 0 0 0 0 1 0⎥                      ⎢0 0 1 0⎥
⎢0 0 0 0 1 0 0⎥       ⎡1 1 0 1⎤     ⎢0 1 0 0⎥
⎢0 0 0 1 0 0 0⎥       ⎢1 1 1 0⎥     ⎢1 0 0 0⎥
⎢0 0 1 0 0 0 0⎥       ⎢0 1 1 1⎥     ⎢0 1 1 1⎥
⎢0 1 0 0 0 0 0⎥     · ⎢1 0 0 0⎥  =  ⎢1 1 1 0⎥
⎢1 0 0 0 0 0 0⎥       ⎢0 1 0 0⎥     ⎢1 1 0 1⎥
⎢1 1 0 0 0 0 0⎥       ⎢0 0 1 0⎥     ⎢0 0 1 1⎥
⎢0 1 1 0 0 0 0⎥       ⎣0 0 0 1⎦     ⎢1 0 0 1⎥
⎢0 0 1 1 0 0 0⎥                      ⎢1 1 1 1⎥
⎢0 0 0 1 1 0 0⎥                      ⎢1 1 0 0⎥
⎢0 0 0 0 1 1 0⎥                      ⎢0 1 1 0⎥
⎣0 0 0 0 0 1 1⎦                      ⎣0 0 1 1⎦
The first matrix contains ALL single errors and adjacent double errors. The
second matrix is Hᵀ. The matrix on the RHS is the syndrome matrix.
Since we can map all the adjacent double error patterns to distinct syndrome
vectors, we will be able to correct all the adjacent double errors.
Q4.11 Hint: Start from the basic definition of the Fire code.
Q4.12 (i) Assume this Fire code is over GF(2). For a valid generator polynomial of a Fire
code, g(x) = (x^(2t−1) + 1) p(x), where p(x) does not divide (x^(2t−1) + 1).
Dividing x¹¹ + 1 by p(x) = x⁶ + x + 1:
    x¹¹ + 1 = (x⁵ + 1)(x⁶ + x + 1) + (x⁵ + x)
The remainder x⁵ + x ≠ 0, so p(x) does not divide x¹¹ + 1 and g(x) is valid.

(ii) A Fire code can correct all burst errors of length t or less. So this code can
correct all bursts of length 6 or less.
Q.4.13 g(x) is a codeword that is a burst of length (n − k) + 1. The probability that a burst error of
length (n − k) + 1 cannot be detected is 1/2^((n−k)+1).
Therefore g(x) can detect a fraction 1 − 2^(−(n−k+1)) of all burst patterns of length
(n − k + 1).
[Figure: encoder shift register with stages x, x², ..., x⁶ and tap coefficients 1, 2, 2, 3, 3]

Meggitt Decoder:

[Figure: Meggitt decoder shift register with stages x, x², ..., x⁶ and tap coefficients 2, 2]
(i) Errors that can be corrected ≤ ⌊(41 − 1)/2⌋ = 20.

(ii) Burst errors that can be corrected ≤ ⌊(n − k)/2⌋ = 20.

However, the actual d* = 6. Thus the Singleton bound is quite loose here. This
g(x), however, is excellent for detecting and correcting burst errors.
Representation of GF(3²):
Choose the primitive polynomial p(x) = x² + x + 2.

Table 1
Exponential | Polynomial | Ternary  | Decimal  | Minimal
Notation    | Notation   | Notation | Notation | Polynomial
0           | 0          | 00       | 0        | x
α⁰          | 1          | 01       | 1        | x + 2
α¹          | z          | 10       | 3        | x² + x + 2
α²          | 2z + 1     | 21       | 7        | x² + 1
α³          | 2z + 2     | 22       | 8        | x² + x + 2
α⁴          | 2          | 02       | 2        | x + 1
α⁵          | 2z         | 20       | 6        | x² + 2x + 2
α⁶          | z + 2      | 12       | 5        | x² + 1
α⁷          | z + 1      | 11       | 4        | x² + 2x + 2
Q5.2 (i) Using the minimal polynomials calculated in Table 1 of the previous solution:
GF(3)
x8 – 1 = (x + 1) (x + 2) (x2 + 1) (x2 + 2x + 2) (x2 + x + 2)
Minimum distance d* = 5
n = 8, n – k = 5 ⇒ k = 3.
Code rate = k/n = 3/8.
Q5.7 Hint: Start with an RS code over GF(q^m) for a designed distance d. Then show that
the BCH code is a subfield subcode of a Reed-Solomon code of the same designed distance.
D(x) = det ⎡1    x     x²    ...  x^(µ−1)  ⎤
           ⎢1    X₂    X₂²   ...  X₂^(µ−1) ⎥
           ⎢:    :     :          :        ⎥
           ⎣1    X_µ   X_µ²  ...  X_µ^(µ−1)⎦
The determinant can be expanded in terms of the elements of the first row multiplied by the
cofactors of those elements. This gives a polynomial in x of degree µ − 1,
which can be written
    D(x) = d_(µ−1) x^(µ−1) + ... + d₁x + d₀
The polynomial D(x) has at most µ − 1 zeros. The coefficient d_(µ−1) is itself the
determinant of a Vandermonde matrix, and by the induction hypothesis, is nonzero. If for
any i, 2 ≤ i ≤ µ, we set x = X_i, then two rows of the matrix are equal, and D(X_i) = 0. Thus
for each i ≠ 1, X_i is a zero of D(x), and because they are all distinct and there are µ − 1 of
them, the polynomial can easily be factored:
    D(x) = d_(µ−1) ∏_(i=2)^µ (x − X_i).
Therefore the determinant of the original Vandermonde matrix is
    D(X₁) = d_(µ−1) ∏_(i=2)^µ (X₁ − X_i).
This is nonzero because d_(µ−1) is nonzero and X₁ is different from each of the remaining
X_i. Hence the determinant of the µ × µ Vandermonde matrix is nonzero, and by
induction, the theorem is true for all µ.
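The conclusion can be checked against the closed-form product ∏_(i<j)(X_j − X_i) for a small example (Python sketch with exact rational arithmetic; `det` is a naive Laplace expansion written only for illustration):

```python
from fractions import Fraction
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion along the first row (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    total = Fraction(0)
    for j, a in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1)**j * a * det(minor)
    return total

X = [Fraction(x) for x in (1, 2, 3, 5)]          # distinct points
V = [[x**j for j in range(len(X))] for x in X]   # Vandermonde matrix
expected = 1
for i, j in combinations(range(len(X)), 2):
    expected *= (X[j] - X[i])                    # product formula

assert det(V) == expected and expected != 0
print(det(V))  # 48
```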
Q5.9 (i) Given: H = ⎡1  1   1   ...  1  ⎤
                    ⎢1  2   3   ...  10 ⎥
                    ⎢1  2²  3²  ...  10²⎥
                    ⎢1  2³  3³  ...  10³⎥
                    ⎢:  :   :        :  ⎥
                    ⎣1  2⁸  3⁸  ...  10⁸⎦

We observe that s_r = [Σ_(i=1)^10 i^r] mod 11 = 0 for r = 0, 1, 2, 3, ..., 8.
Thus all the 10 columns of H add up to the zero vector. Hence d* = 10 ⇒ t = 4.
Q5.10 (i) Given H = ⎡1  1   1   1   ...  1  ⎤
                    ⎢1  2   3   4   ...  10 ⎥
                    ⎢1  2²  3²  4²  ...  10²⎥
                    ⎢1  2³  3³  4³  ...  10³⎥
                    ⎢1  2⁴  3⁴  4⁴  ...  10⁴⎥
                    ⎣1  2⁵  3⁵  4⁵  ...  10⁵⎦

Use the same approach as the previous question.

(ii) First convert the matrix into H = [−Pᵀ | I] form. Since the math is over GF(11), and
q = 11 is odd, it is simply modulo-11 arithmetic. The generator matrix is given by
G = [I | P].
(i) There are 2⁴ = 16 states for this encoder. The state transitions and the output are given
in the table below.
0 1 1 0 0 0 1 1 0 0 0
1 1 1 0 0 1 1 1 0 1 1
0 1 1 0 1 0 1 1 0 0 0
1 1 1 0 1 1 1 1 0 1 1
0 1 1 1 0 0 1 1 1 0 1
1 1 1 1 0 1 1 1 1 1 0
0 1 1 1 1 0 1 1 1 0 1
1 1 1 1 1 1 1 1 1 1 0
(iii) dfree = 6.
Q 6.2 (i) [Figure: convolutional encoder circuit]
Output: 111 110 110 010
Q6.3 (i) [Figure: convolutional encoder circuit]
(iii) d* = 5, dfree = 5
(iv) G(D) = [1 + D D2 + D3 D2 + D3 + D4]
Q6.4 (i) k₀ = 3, n₀ = 4, R = 3/4, v = 4 + 2 + 2 = 8, where
v = Σ_(i=1)^(k₀) max_j [deg g_ij(D)] = sum of the highest power of g_ij(D) in each row of G(D).
⎡ D + D2 D2 D + D2 0 ⎤
⎢ ⎥
(ii) G ( D) = ⎢ D 2 D D D2 ⎥
⎢ 0 0 D4 D 2 ⎥⎦
⎣ 3x4
⎡0 0 0 0⎤
G 0 = ⎢⎢ 0 0 0 0 ⎥⎥
⎢⎣ 0 0 0 0 ⎥⎦
⎡1 0 1 0⎤ ⎡1 1 1 0⎤
G1 = ⎢⎢ 0 1 1 0 ⎥⎥ G2 = ⎢⎢ 1 0 0 1 ⎥⎥
⎢⎣ 0 0 0 0 ⎥⎦ ⎢⎣ 0 0 0 1 ⎥⎦
⎡0 0 0 0⎤ ⎡0 0 0 0⎤
G3 = ⎢⎢ 0 0 0 0 ⎥⎥ G4 = ⎢⎢ 0 0 0 0 ⎥⎥
⎢⎣ 0 0 0 0 ⎥⎦ ⎢⎣ 0 0 1 0 ⎥⎦
⎡ D 0 1 D2 D + D2 ⎤
⎢ ⎥
Q6.5 G ( D) = ⎢ D 2 0 0 1+ D 0 ⎥
⎢ 1 0 D2 0 D 2 ⎥⎦
⎣
k0 = 3, n0 = 5, R = 3/5
Q6.6 Hint:
Use:
H¹ = ⎡P₀ᵀ  −I                                     ⎤
     ⎢P₁ᵀ   0   P₀ᵀ  −I                           ⎥
     ⎢P₂ᵀ   0   P₁ᵀ   0   P₀ᵀ  −I                 ⎥
     ⎢ :    :    :    :                           ⎥
     ⎢Pₘᵀ   0  Pₘ₋₁ᵀ  0  Pₘ₋₂ᵀ  0  ...  P₀ᵀ  −I  ...⎥
     ⎢         Pₘᵀ    0  Pₘ₋₁ᵀ  0  ...             ⎥
     ⎣                Pₘᵀ    0  ...             ...⎦ , and

G = ⎡I P₀   0 P₁   0 P₂  ...  0 Pₘ     0 0      0 0    ...⎤
    ⎢0 0    I P₀   0 P₁  ...  0 Pₘ₋₁   0 Pₘ     0 0    ...⎥
    ⎢0 0    0 0    I P₀  ...  0 Pₘ₋₂   0 Pₘ₋₁   0 Pₘ   ...⎥
    ⎣:  :    :  :    :  :       :       :        :        ⎦ ,
Q6.8 g1 = D3 + D2 + 1
g2 = D3 + D
g3 = D2 + 1
(iii) 0011011
Q6.9 Hint: Solve the problem similar to problem 6.7. You can break up the problem by
representing each symbol ∈{0, 1, 2} by two bits. Thus, this encoder takes in one ternary
symbol (2 bits) and converts it to 2 ternary symbols (4 bits). The memory unit of this
encoder consists of 4 delay elements. Thus the number of states = 34 = 81. Write a small
computer program to calculate the dmin for this code.
Q7.1 G(D) = [ 1    D       D + D²
              D²   1 + D   1 + D + D² ]
(Figure: QPSK and 8-PSK signal constellations with symbol energy Es and intra-subset distances δi marked.)

QPSK:  δ0² = 2Es,      δ1² = 4Es.
8-PSK: δ0² = 0.586Es,  δ1² = 2Es,  δ2² = 3.414Es,  δ3² = 4Es.

Asymptotic coding gain = 10 log (4.586/2) = 3.6040 dB.
"Copyrighted Material" -"Additional resource material supplied with the book Information, Theory, Coding and 2
Cryptography, Second Edition written by Ranjan Bose & published by McGraw-Hill Education (India) Pvt. Ltd. This
resource material is for Instructor's use only."
Q7.2
(b) g∞ = g |SNR→∞ = 10 log [ (d²free/Es)coded / (d²free/Es)uncoded ]
       = 10 log (6/2)
       = 4.7712 dB.
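The asymptotic coding gain computations of Q7.1 and Q7.2 follow the same formula, which can be packaged as a one-line helper (a sketch; the function name is mine):

```python
import math

def asymptotic_coding_gain(d2_coded, d2_uncoded):
    """g_inf = 10 log10 of the ratio of normalized squared free distances."""
    return 10 * math.log10(d2_coded / d2_uncoded)

# Q7.2: 10 log(6/2) = 4.7712 dB
print(round(asymptotic_coding_gain(6.0, 2.0), 4))   # -> 4.7712
# Q7.1: 10 log(4.586/2), approximately 3.604 dB
print(asymptotic_coding_gain(4.586, 2.0))
```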
Q7.3
(i)
Present State (3 bits)   Input (2 bits)   Next State (3 bits)   Output (3 bits)
0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 1 0
0 0 0 1 0 1 1 0 0 0 0
0 0 0 1 1 1 1 0 0 1 0
0 0 1 0 0 0 1 0 1 0 1
0 0 1 0 1 0 1 0 1 1 1
0 0 1 1 0 1 0 0 1 0 1
0 0 1 1 1 1 0 0 1 1 1
0 1 0 0 0 0 0 1 0 0 0
0 1 0 0 1 0 0 1 0 1 0
0 1 0 1 0 1 1 1 0 0 0
0 1 0 1 1 1 1 1 0 1 0
0 1 1 0 0 0 1 1 1 0 1
0 1 1 0 1 0 1 1 1 1 1
0 1 1 1 0 1 0 1 1 0 1
0 1 1 1 1 1 0 1 1 1 1
1 0 0 0 0 0 0 1 1 0 0
1 0 0 0 1 0 0 1 1 1 0
1 0 0 1 0 1 1 1 1 0 0
1 0 0 1 1 1 1 1 1 1 0
1 0 1 0 0 0 1 1 0 0 1
1 0 1 0 1 0 1 1 0 1 1
1 0 1 1 0 1 0 1 0 0 1
1 0 1 1 1 1 0 1 0 1 1
1 1 0 0 0 0 0 0 1 0 0
1 1 0 0 1 0 0 0 1 1 0
1 1 0 1 0 1 1 0 1 0 0
"Copyrighted Material" -"Additional resource material supplied with the book Information, Theory, Coding and 3
Cryptography, Second Edition written by Ranjan Bose & published by McGraw-Hill Education (India) Pvt. Ltd. This
resource material is for Instructor's use only."
1 1 0 1 1 1 1 0 1 1 0
1 1 1 0 0 0 1 0 0 0 1
1 1 1 0 1 0 1 0 0 1 1
1 1 1 1 0 1 0 0 0 0 1
1 1 1 1 1 1 0 0 0 1 1
(ii) There are 8 states in the trellis diagram. Construct it using the table above. The
trellis diagram has parallel paths. In succinct form, the trellis diagram can be written as
S0,S2,S0,S2
S4,S6,S4,S6
S7,S5,S7,S5
S0,S2,S0,S2
S1,S3,S1,S3
S6,S4,S6,S4
S5,S7,S5,S7
S3,S1,S3,S1
N(d²free) = 2 × 8 = 16.
S0,S4,S0,S4
S2,S6,S2,S6
S1,S5,S1,S5
S0,S4,S0,S4
S7,S3,S7,S3
S6,S2,S6,S2
S3,S7,S3,S7
S3,S1,S3,S1
(a) (Set-partitioning figure: A0 partitioned into subsets A1; ∆1 = δ1.)
(b) If the minimum Euclidean distance between parallel transitions is ∆m̃+1 and the minimum Euclidean distance between non-parallel paths of the trellis is dfree(m̃), the free Euclidean distance of the TCM encoder is
dfree = min[ ∆m̃+1, dfree(m̃) ].
If all the symbols are used with equal frequency, the asymmetric constellation diagram
will result in a smaller value of dfree in general (i.e., poorer performance).
Q7.6 Hint: For the right hand side of the inequality, divide both the numerator and
denominator by K and take lim K → ∞.
Q7.8
a) The number of states depends on the desired BER and the computational
complexity permissible.
b) The internal memory of the encoder will be decided by the number of states in the
trellis.
c) The decision regarding parallel paths can be taken upon calculation of the dfree.
Parallel paths cannot be ruled out right in the beginning. The number of branches
emanating from each node will depend on the constellation size of the modulation
scheme.
d) The decision is between MPSK and MQAM. Beyond M = 16 it is better to use
MQAM (see their BERs in AWGN!).
e) Since AWGN is under consideration, we will use Ungerboeck's design rules.
Q7.9
The Viterbi decoding metric is m(r_l, s_l) = ln p(r_l | s_l). A metric of this form is chosen
because it behaves like a distance measure between the received signal and the signal
associated with the corresponding branch in the trellis. It has an additive property,
namely, that the total metric for a sequence of symbols is the sum of the metric for each
channel input and output pair.
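The additive property is what makes the Viterbi algorithm work: survivor metrics are just running sums of branch metrics. A sketch for a binary symmetric channel, where ln p(r|s) is an affine function of the Hamming distance d(r, s), so maximizing summed log-likelihoods is equivalent to minimizing accumulated Hamming distance. The rate-1/2 (7, 5) encoder and the message below are illustrative, not from the problem:

```python
def encode(bits, gens=(0b111, 0b101), mem=2):
    state, out = 0, []
    for b in bits:
        reg = (b << mem) | state
        out += [bin(reg & g).count('1') % 2 for g in gens]
        state = reg >> 1
    return out

def viterbi(received, gens=(0b111, 0b101), mem=2):
    n_states = 1 << mem
    INF = float('inf')
    metric = [0] + [INF] * (n_states - 1)     # start in the zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << mem) | s
                out = [bin(reg & g).count('1') % 2 for g in gens]
                t = reg >> 1
                # additive property: total metric = sum of branch metrics
                m = metric[s] + sum(x != y for x, y in zip(out, r))
                if m < new_metric[t]:
                    new_metric[t] = m
                    new_paths[t] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0, 0]   # tail zeros flush the encoder
rx = encode(msg)
rx[3] ^= 1                     # one channel error
print(viterbi(rx) == msg)      # -> True
```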
Q7.10.
(a) Direct application of Equation 7.55 gives a new d_p²(L) equal to ten times the previous d_p²(L). However, to obtain such a large value of d_p²(L), we need to construct a trellis with a larger number of states.
(b) The new d 2free will be larger resulting in an improvement in the performance of
the new TCM scheme over AWGN channels as well.
"Copyrighted Material" -"Additional resource material supplied with the book Information, Theory, Coding and 7
Cryptography, Second Edition written by Ranjan Bose & published by McGraw-Hill Education (India) Pvt. Ltd. This
resource material is for Instructor's use only."
Q7.11
(a) The trellis diagram of the encoder:
(Figure: four-state trellis, states S0–S3, with branches labeled by the output pairs 00, 11, 01, 10.)
(b) T(D) = D³(1 − D + D³) / (1 − 2D + D² + D⁴).
(Modified state diagram: states S0, S2, S1, S3 with branch gains 1, D, D² along S0 → S2 → S1 → S0.)
(c) (Augmented state diagram: the branch labels now carry the length and input-weight indicators L and I, e.g. DLI, D²L, L, along the paths S0 → S2 → S1 → S0 and through S3.)
"Copyrighted Material" -"Additional resource material supplied with the book Information, Theory, Coding and 8
Cryptography, Second Edition written by Ranjan Bose & published by McGraw-Hill Education (India) Pvt. Ltd. This
resource material is for Instructor's use only."
T(D, L, I) = [ L³ID³(1 − LID) + L⁴I²D⁶ ] / (1 − LID − L²ID + L³I²D² + L³I²D⁴).
(d) d^H_free = 3.
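The free distance can be read off by expanding T(D) as a power series and finding the lowest power of D with a nonzero coefficient. A numerical sketch, assuming the transfer-function polynomials of part (b):

```python
def series(num, den, terms):
    """First `terms` power-series coefficients of num(D)/den(D) by long division."""
    assert den[0] != 0
    coeffs = []
    work = list(num) + [0] * terms
    for k in range(terms):
        c = work[k] / den[0]
        coeffs.append(c)
        for j, d in enumerate(den):          # subtract c * D^k * den(D)
            if k + j < len(work):
                work[k + j] -= c * d
    return coeffs

# N(D) = D^3 - D^4 + D^6,  Q(D) = 1 - 2D + D^2 + D^4
N = [0, 0, 0, 1, -1, 0, 1]
Q = [1, -2, 1, 0, 1]
coeffs = series(N, Q, 10)
dfree = next(k for k, c in enumerate(coeffs) if c != 0)
print(dfree)   # -> 3
```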
Q7.12: Hint: Carry out the mathematical exercise along the lines of Section 7.6.
Q8.1 (a) Let M be the size of the alphabet. To crack this code by brute force we need at most M − 1 attempts. In this case we require 26 − 1 = 25 attempts.
(b) 25 ms
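The 25 attempts of part (a) can be enumerated directly. A sketch assuming an uppercase A–Z alphabet (the sample ciphertext is illustrative):

```python
def shift_candidates(ciphertext):
    """Try every nonzero shift: at most 25 attempts for a 26-letter alphabet."""
    for k in range(1, 26):
        yield k, ''.join(chr((ord(c) - 65 - k) % 26 + 65) for c in ciphertext)

# "HELLO" encrypted with shift 3 is "KHOOR"; the attack recovers it at k = 3
candidates = dict(shift_candidates("KHOOR"))
print(candidates[3])   # -> HELLO
```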
Q8.3 (a) Assuming the plain text is staggered over k rows, we apply a brute-force attack for each k, 1 < k < n, where n is the length of the cipher text. For each k, divide the cipher text into ⌊n/k⌋-sized blocks (k blocks in total). Number of attacks = n − 2.
(b) Given k, apply the attack corresponding to that k as given in part (a) to decrypt.
(b) Given the cipher text, use the second key to form the second matrix (write horizontally, read vertically). After reading horizontally, we get the intermediate cipher text. Next, use the first key to form the rectangular matrix. To find the plain text at location XY, look at row X and column Y.
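The brute-force idea of Q8.3(a) can be sketched for a simple column transposition (write the plaintext row-wise into k columns, read column-wise). All helper names and the crib-based success test are illustrative; n is assumed divisible by k for simplicity:

```python
def encrypt(plain, k):
    """Write row-wise into k columns, read the matrix off column-wise."""
    return ''.join(plain[i::k] for i in range(k))

def brute_force(cipher, crib):
    """Try every 2 <= k <= n-1 and keep the candidate containing the crib."""
    n = len(cipher)
    for k in range(2, n):
        if n % k:
            continue
        rows = n // k
        candidate = ''.join(cipher[i::rows] for i in range(rows))  # invert
        if crib in candidate:
            return k, candidate
    return None

cipher = encrypt("ATTACKATDAWN", 4)
print(brute_force(cipher, "DAWN"))   # -> (4, 'ATTACKATDAWN')
```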
Q8.6 27!
Q8.7
(a) P = 29, Q = 61
N = PQ = 29 × 61 = 1769
φ(N) = (P − 1)(Q − 1) = 28 × 60 = 1680
(b) R = 82, S = 83, A = 65
(c) P = 37, Q = 67
N = PQ = 37 × 67 = 2479
φ(N) = (P − 1)(Q − 1) = 36 × 66 = 2376
E = 5
D = 1901
Public key: E = 5; Private key: D = 1901.
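The numbers in part (c) are easy to verify: D is the inverse of E modulo φ(N). A sketch (requires Python 3.8+ for the modular-inverse form of `pow`):

```python
P, Q, E = 37, 67, 5
N = P * Q                    # 2479
phi = (P - 1) * (Q - 1)      # 2376
D = pow(E, -1, phi)          # modular inverse of E mod phi (Python 3.8+)
print(N, phi, D)             # -> 2479 2376 1901
assert (E * D) % phi == 1

# round-trip an arbitrary message block m < N
m = 65
assert pow(pow(m, E, N), D, N) == m
```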
Q8.8 Hint: Find the number of primes below 10⁹⁰ and below 10¹⁰⁰, and subtract the former from the latter.
Q8.9
(i) Each pair should have a different key, so the total number of distinct keys = C(N, 2) = N(N − 1)/2.
(ii) There is 1 public key and N private keys.
Q8.10 1/3