
Introduction to Information Theory
channel capacity and models
A.J. Han Vinck
University of Essen
October 2002
content
Introduction
Entropy and some related properties
Source coding
Channel coding
Multi-user models
Constrained sequences
Applications to cryptography
This lecture
Some models
Channel capacity
converse
some channel models

Input X → channel with transition probabilities P(y|x) → Output Y

memoryless:
- the output at time i depends only on the input at time i
- the input and output alphabets are finite
channel capacity:

I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X)   (Shannon 1948)

The channel maps X to Y; of the input entropy H(X), the part H(X|Y) remains uncertain after observing the output.

C = max over P(x) of I(X;Y)   (the channel capacity)

notes:
- the capacity depends on the input probabilities
- because the transition probabilities are fixed
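
As a quick numerical illustration (not from the slides), the sketch below computes I(X;Y) = H(Y) - H(Y|X) for a given input distribution and transition matrix; the BSC example at the bottom is only a sanity check.

```python
import numpy as np

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) = H(Y) - H(Y|X) in bits for a discrete memoryless channel.

    p_x         : input distribution, shape (nx,)
    p_y_given_x : transition matrix W[x, y] = P(y|x), rows sum to 1
    """
    p_x = np.asarray(p_x, dtype=float)
    W = np.asarray(p_y_given_x, dtype=float)

    def H(p):                       # entropy in bits, with 0*log 0 := 0
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    p_y = p_x @ W                   # output distribution
    H_Y_given_X = np.sum(p_x * np.array([H(row) for row in W]))
    return H(p_y) - H_Y_given_X

# sanity check: BSC with p = 0.1 and uniform input gives 1 - h(0.1) ~ 0.531
print(mutual_information([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]]))
```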
channel model:
binary symmetric channel (BSC)

crossover probabilities: P(1|0) = P(0|1) = p,  P(0|0) = P(1|1) = 1 - p

equivalent additive model: Y = X ⊕ E (modulo-2 addition), with E produced by an error source
E is the binary error sequence s.t. P(1) = 1-P(0) = p


X is the binary information sequence
Y is the binary output sequence
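
A minimal simulation of the additive description Y = X ⊕ E (my own sketch; block length and p are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
p = 0.05                                    # crossover probability
x = rng.integers(0, 2, size=100_000)        # binary information sequence X
e = (rng.random(x.size) < p).astype(int)    # error sequence E with P(E=1) = p
y = x ^ e                                   # output sequence Y = X xor E

print("empirical error rate:", np.mean(y != x))   # close to p
```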
burst error model

Random error channel; outputs independent:
error source with P(0) = 1 - P(1)

Burst error channel; outputs dependent on a channel state (good or bad):
P(0 | state = bad)  = P(1 | state = bad)  = 1/2
P(0 | state = good) = 1 - P(1 | state = good) = 0.999

The state switches between good and bad according to the transition probabilities Pgg, Pgb, Pbg, Pbb (a two-state Markov chain).
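
The two-state error source can be simulated as follows (a sketch; the per-state error probabilities are the ones above, but the state transition probabilities Pgb and Pbg are not specified on the slide, so the values below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

p_err = {"good": 0.001, "bad": 0.5}   # error probability per state (from the slide)
P_gb, P_bg = 0.01, 0.1                # good->bad and bad->good transitions (assumed, not on the slide)

def burst_error_source(n):
    """Generate n error bits from the two-state (good/bad) Markov model."""
    state = "good"
    errors = np.empty(n, dtype=int)
    for i in range(n):
        errors[i] = rng.random() < p_err[state]
        if state == "good":
            state = "bad" if rng.random() < P_gb else "good"
        else:
            state = "good" if rng.random() < P_bg else "bad"
    return errors

e = burst_error_source(100_000)
print("average error rate:", e.mean())    # errors cluster in bursts while the state is bad
```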
Interleaving

message → encoder → interleaver → bursty channel → interleaver^-1 → decoder → message

After de-interleaving, the burst errors are spread out, so the decoder effectively sees random errors.

Note: interleaving introduces encoding and decoding delay

Homework: compare block and convolutional interleaving w.r.t. delay


Interleaving: block

Channel models are difficult to derive:
- burst definition?
- random and burst errors?
For practical reasons: convert burst errors into random errors.

read in row-wise, transmit column-wise:

1 0 1 0 1
0 1 0 0 0
0 0 0 1 0
1 0 0 1 1
1 1 0 0 1
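
A sketch of this block interleaver (write row-wise, transmit column-wise), using the 5x5 example above:

```python
import numpy as np

def block_interleave(bits, rows, cols):
    """Write the data row-wise into a rows x cols array, read it out column-wise."""
    return np.asarray(bits).reshape(rows, cols).T.flatten()

def block_deinterleave(bits, rows, cols):
    """Inverse operation: write column-wise, read out row-wise."""
    return np.asarray(bits).reshape(cols, rows).T.flatten()

data = [1, 0, 1, 0, 1,
        0, 1, 0, 0, 0,
        0, 0, 0, 1, 0,
        1, 0, 0, 1, 1,
        1, 1, 0, 0, 1]               # the 5x5 example from the slide

tx = block_interleave(data, 5, 5)    # column-wise transmission order
rx = block_deinterleave(tx, 5, 5)    # restores the original row-wise order
assert list(rx) == data
```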
De-Interleaving: block

read in column-wise (a burst during transmission corrupts consecutive symbols, marked e), read out row-wise:

1 0 1 e 1
0 1 e e 0
0 0 e 1 0
1 0 e 1 1
1 1 e 0 1

After reading out row-wise the burst is spread over the rows: each row contains only one or two errors (e.g. the row 0 0 e 1 0 contains 1 error).
Interleaving: convolutional

The input stream is split over m branches:
input sequence 0: no delay
input sequence 1: delay of b elements
...
input sequence m-1: delay of (m-1)b elements

Example: b = 5, m = 3
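
A possible implementation sketch of such a convolutional interleaver, where branch i is modelled as a shift register that delays its symbols by i·b positions of that branch (the exact commutator convention here is an assumption):

```python
from collections import deque

def convolutional_interleaver(symbols, b=5, m=3, fill=0):
    """Branch i (i = 0 .. m-1) is a shift register of i*b cells, so the symbols
    routed through it are delayed by i*b positions of that branch.  A commutator
    feeds the branches in turn.  b = 5, m = 3 as in the slide's example."""
    branches = [deque([fill] * (i * b)) for i in range(m)]
    out = []
    for i, s in enumerate(symbols):
        d = branches[i % m]
        d.append(s)                  # new symbol enters the shift register ...
        out.append(d.popleft())      # ... and the oldest one leaves
    return out

print(convolutional_interleaver(list(range(12))))   # fill symbols appear until the registers are full
```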
Class A Middleton channel model

The I (in-phase) and Q (quadrature) components are disturbed by AWGN with variance σ0², σ1², σ2², ... (I and Q have the same variance).

Channel k is selected with probability Q(k); the selected channel acts as a binary channel with transition probability P(k).
Example: Middleton's class A

Pr{ σ = σ(k) } = Q(k),  k = 0, 1, ...

σ(k) = ( (k·σI²/A + σG²) / (σI² + σG²) )^(1/2)

Q(k) = e^(-A) · A^k / k!

A is the impulsive index
σI² and σG² are the impulsive and Gaussian noise power


Example of parameters

Middleton's class A: A = 1; E = σ = 1; σI²/σG² = 10^-1.5

k    Q(k)    p(k) (= transition probability)
0    0.36    0.00
1    0.37    0.16
2    0.19    0.24
3    0.06    0.28
4    0.02    0.31

Average p = 0.124;   Capacity (BSC) = 1 - h(0.124) = 0.457
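
The numbers can be reproduced as follows (a sketch: Q(k) is computed from the Poisson formula on the previous slide, while the p(k) values are copied from the table, since computing them would require the exact signal and noise parameters):

```python
import math

A = 1.0
Q = [math.exp(-A) * A**k / math.factorial(k) for k in range(5)]   # 0.368, 0.368, 0.184, 0.061, 0.015
p = [0.00, 0.16, 0.24, 0.28, 0.31]                                # p(k) copied from the table

p_avg = sum(qk * pk for qk, pk in zip(Q, p))                      # average crossover probability

def h(x):                                                         # binary entropy in bits
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

print("average p  ~", round(p_avg, 3))                            # ~0.125 (the slide rounds to 0.124)
print("C_BSC = 1 - h(p) ~", round(1 - h(p_avg), 3))               # ~0.457, as on the slide
```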
Example of parameters

Middleton's class A: E = 1; σ = 1; σI²/σG² = 10^-3

[Figure: Q(k) and the transition probabilities p(k), plotted for A = 0.1, A = 1 and A = 10]
Example of parameters

Middleton's class A: E = 0.01; σ = 1; σI²/σG² = 10^-3

[Figure: Q(k) and the transition probabilities p(k), plotted for A = 0.1, A = 1 and A = 10]
channel capacity: the BSC

I(X;Y) = H(Y) - H(Y|X)

the maximum of H(Y) = 1, since Y is binary

H(Y|X) = P(X=0)·h(p) + P(X=1)·h(p) = h(p)

Conclusion: the capacity of the BSC is CBSC = 1 - h(p)

Homework: draw CBSC; what happens for p > 1/2?
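
A small sketch that tabulates CBSC = 1 - h(p) and shows the behaviour around and beyond p = 1/2:

```python
import math

def h(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in [0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0]:
    print(f"p = {p:4.2f}   C_BSC = {1 - h(p):.3f}")
# the curve is symmetric around p = 1/2: C(p) = C(1-p), and C(1/2) = 0
```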
channel capacity: the Z-channel

Application in optical communications

0 → 0 (light on) with probability 1; 1 → 1 (light off) with probability 1 - p, 1 → 0 with probability p
P(X=0) = P0

H(Y) = h( P0 + p(1 - P0) )
H(Y|X) = (1 - P0)·h(p)

For capacity, maximize I(X;Y) = H(Y) - H(Y|X) over P0
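
A simple grid search over P0 performs this maximization numerically (a sketch; the value of p is arbitrary):

```python
import math

def h(x):
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def I_z(P0, p):
    """I(X;Y) = H(Y) - H(Y|X) for the Z-channel with P(X=0) = P0."""
    return h(P0 + p * (1 - P0)) - (1 - P0) * h(p)

p = 0.1
best_P0, C = max(((P0 / 1000, I_z(P0 / 1000, p)) for P0 in range(1001)), key=lambda t: t[1])
print(f"p = {p}: C ~ {C:.4f} bit at P0 ~ {best_P0:.3f}")   # the maximizing P0 is in general not 1/2
```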
channel capacity: the erasure channel

Application: CDMA detection

0 → 0 and 1 → 1 with probability 1 - e; each input is erased (output E) with probability e
P(X=0) = P0

I(X;Y) = H(X) - H(X|Y)
H(X) = h(P0)
H(X|Y) = e·h(P0)

Thus Cerasure = 1 - e

(check!, draw and compare with the BSC and the Z-channel)
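
The same kind of check for the erasure channel (a sketch): I(X;Y) = (1 - e)·h(P0) is maximized at P0 = 1/2, giving 1 - e.

```python
import math

def h(x):
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def I_erasure(P0, e):
    """I(X;Y) = H(X) - H(X|Y) = (1 - e) * h(P0) for the binary erasure channel."""
    return (1 - e) * h(P0)

e = 0.2
C = max(I_erasure(P0 / 1000, e) for P0 in range(1001))
print(f"e = {e}: C ~ {C:.3f}")    # maximum at P0 = 1/2, giving 1 - e = 0.8
```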
channel models: general diagram

Input alphabet X = {x1, x2, ..., xn}
Output alphabet Y = {y1, y2, ..., ym}
transition probabilities Pj|i = PY|X(yj|xi)

In general: calculating the capacity needs more theory
clue:

I(X;Y) is concave (∩-convex) in the input probabilities,

i.e. finding the maximum is simple
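
One standard way to perform this maximization for an arbitrary discrete memoryless channel is the Blahut-Arimoto algorithm (not covered on the slides; the sketch below follows the textbook iteration, which exploits exactly this concavity, and uses the BSC as a sanity check):

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Capacity (in bits per channel use) of a DMC with W[x, y] = P(y|x)."""
    W = np.asarray(W, dtype=float)
    nx = W.shape[0]
    r = np.full(nx, 1.0 / nx)                      # input distribution, start uniform
    for _ in range(iters):
        q = r[:, None] * W                         # proportional to P(x, y)
        q /= q.sum(axis=0, keepdims=True)          # posterior q(x|y)
        log_r = np.sum(W * np.log(q + 1e-300), axis=1)
        r = np.exp(log_r - log_r.max())            # r(x) ~ exp( sum_y W(y|x) log q(x|y) )
        r /= r.sum()
    q = r[:, None] * W                             # recompute the posterior for the final r
    q /= q.sum(axis=0, keepdims=True)
    # capacity = sum_x r(x) sum_y W(y|x) log2( q(x|y) / r(x) )
    C = np.sum(r[:, None] * W * np.log2((q + 1e-300) / (r[:, None] + 1e-300)))
    return C, r

# sanity check: BSC with p = 0.1 should give 1 - h(0.1) ~ 0.531 with uniform input
print(blahut_arimoto([[0.9, 0.1], [0.1, 0.9]]))
```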


Channel capacity

Definition:
The rate R of a code is the ratio k/n, where
k is the number of information bits transmitted
in n channel uses

Shannon showed that:

for R ≤ C encoding methods exist
with decoding error probability → 0 (as n → ∞)
System design

message (one out of 2^k) → code book → code word (length n) → channel → receive word → decoder (uses the code book) → message estimate

There are 2^k code words of length n
Channel capacity:
sketch of proof for the BSC

Code: 2^k binary codewords, chosen at random with P(0) = P(1) = 1/2
Channel errors: P(0→1) = P(1→0) = p
i.e. # (typical) error sequences ≈ 2^(n·h(p))
Decoder: search around the received sequence for a codeword with ≈ np differences

[Figure: the space of 2^n binary sequences, with decoding regions around the codewords]
Channel capacity:
decoding error probability

1. For t errors: P( |t/n - p| > ε ) → 0 for n → ∞
(law of large numbers)

2. P( > 1 code word in the decoding region )   (codewords random):

P(≥ 1) ≤ (2^k - 1) · 2^(n·h(p)) / 2^n → 0

for R = k/n < 1 - h(p) and n → ∞
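
The bound of step 2 is easy to evaluate in the log domain (a sketch; n and R are arbitrary choices with R < 1 - h(p)):

```python
import math

def h(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p, R = 0.1, 0.4                      # R < 1 - h(0.1) ~ 0.531, so the bound must vanish
for n in [100, 500, 1000, 5000]:
    k = int(R * n)
    log2_bound = k + n * h(p) - n    # log2 of (2^k - 1) * 2^(n h(p)) / 2^n, dropping the -1
    print(f"n = {n:5d}: union bound ~ 2^{log2_bound:.1f}")
# the exponent is ~ n * (R - (1 - h(p))) < 0, so the bound -> 0 as n grows
```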
Channel capacity: converse

For R > C the decoding error probability > 0

[Figure: Pe as a function of the rate k/n, with the threshold at C]
Converse: For a discrete memoryless channel (Xi → channel → Yi):

I(X^n;Y^n) = H(Y^n) - Σ(i=1..n) H(Yi|Xi) ≤ Σ(i=1..n) H(Yi) - Σ(i=1..n) H(Yi|Xi) = Σ(i=1..n) I(Xi;Yi) ≤ nC

Source generates one


source encoder channel decoder
out of 2k equiprobable
m Xn Yn m
messages

Let Pe = probability that m m


converse R := k/n

k = H(M) = I(M;Yn)+H(M|Yn)
1 C n/k - 1/k Pe
Xn is a function of M Fano

I(Xn;Yn) +1+ k Pe
nC +1+ k Pe

Pe 1 C/R - 1/k
Hence: for large k, and R > C,
the probability of error Pe > 0
Appendix:

Assume:
a binary sequence with P(0) = 1 - P(1) = 1 - p
t is the # of 1s in the sequence

Then, for n → ∞ and any ε > 0, the weak law of large numbers gives:
Probability( |t/n - p| > ε ) → 0

i.e. we expect with high probability ≈ pn 1s

Appendix:

Consequence:

1. n(p - ε) < t < n(p + ε) with high probability

2. log2 [ Σ(t = n(p-ε) .. n(p+ε)) C(n,t) ]  ≈  log2 [ 2nε · C(n, pn) ]  ≈  log2 [ 2nε · 2^(n·h(p)) ]

3. (1/n) · log2 [ 2nε · 2^(n·h(p)) ]  →  h(p)

4. A sequence in this set has probability ≈ 2^(-n·h(p))
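
The approximation of step 2 can be checked numerically (a sketch): (1/n)·log2 C(n, pn) approaches h(p) as n grows.

```python
import math

def h(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p = 0.1
for n in [100, 500, 2000]:
    t = round(p * n)
    rate = math.log2(math.comb(n, t)) / n
    print(f"n = {n:5d}: (1/n) log2 C(n, pn) = {rate:.4f}    h(p) = {h(p):.4f}")
```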
