Introduction to Information Theory: Channel Capacity and Models
A.J. Han Vinck
University of Essen
October 2002
content
Introduction
Entropy and some related properties
Source coding
Channel coding
Multi-user models
Constrained sequences
Applications to cryptography
This lecture
Some models
Channel capacity
Converse
some channel models
transition probabilities
memoryless:
- the output at time i depends only on the input at time i
- the input and output alphabets are finite
channel capacity:

(Diagram: X → channel → Y; the input carries H(X) bits of uncertainty, of which H(X|Y) remains after the output is observed.)

C = max over P(x) of I(X;Y) = max over P(x) of ( H(X) − H(X|Y) )

notes:
I(X;Y) depends on the input probabilities, because the transition probabilities are fixed by the channel; the capacity is obtained by maximizing over the input distribution.
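To make this concrete, here is a minimal Python sketch (the helper name mutual_information, the channel matrix and the non-uniform input distribution are my own illustration, not taken from the slides) that evaluates I(X;Y) for a discrete memoryless channel and shows that it depends on the input probabilities:

import numpy as np

def mutual_information(p_x, P):
    """I(X;Y) in bits for input distribution p_x[i] = P(x_i) and channel matrix P[i, j] = P(y_j | x_i)."""
    p_xy = p_x[:, None] * P                 # joint distribution P(x, y)
    p_y = p_xy.sum(axis=0)                  # output marginal P(y)
    mask = p_xy > 0                         # skip zero-probability pairs (0 log 0 = 0)
    return float((p_xy[mask] * np.log2(p_xy[mask] / (p_x[:, None] * p_y)[mask])).sum())

# binary symmetric channel with crossover probability p = 0.1
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(mutual_information(np.array([0.7, 0.3]), P))  # about 0.47 bit: non-uniform input
print(mutual_information(np.array([0.5, 0.5]), P))  # about 0.53 bit = 1 - h(0.1), the maximum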
channel model: binary symmetric channel (BSC)

(Diagram: input X, output Y; 0 → 0 and 1 → 1 with probability 1 − p, 0 → 1 and 1 → 0 with probability p.)

Equivalent error-source model: Y = X ⊕ E, where E is an error source with P(E = 1) = p, independent of the input.
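As a quick illustration of the error-source view (a hypothetical simulation, not part of the slides; the function name bsc is mine), the following Python sketch sends a random word through a BSC with crossover probability p by adding a Bernoulli(p) error pattern modulo 2:

import numpy as np

rng = np.random.default_rng(0)

def bsc(x, p):
    """Pass the binary array x through a BSC: flip each bit independently with probability p."""
    e = (rng.random(x.shape) < p).astype(int)   # error source E with P(E = 1) = p
    return x ^ e                                 # Y = X xor E

x = rng.integers(0, 2, size=20)
y = bsc(x, p=0.1)
print("input :", x)
print("output:", y)
print("# errors:", int((x != y).sum()))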
Interleaving: block
- read in row wise, transmit column wise

De-Interleaving: block
- read in column wise, read out row wise

(Diagram: b × m arrays before and after de-interleaving; a burst of errors or erasures 'e' that arrived in consecutive transmitted positions is spread over the rows, so each row sees only isolated, random-looking errors.)

Example: b = 5, m = 3
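A minimal sketch of such a block interleaver / de-interleaver (the array size b = 5, m = 3 follows the example above; the burst position and the function names are my own illustration):

import numpy as np

def interleave(symbols, m, b):
    """Write m*b symbols into an m x b array row wise, transmit column wise."""
    return np.asarray(symbols).reshape(m, b).T.reshape(-1)

def deinterleave(symbols, m, b):
    """Inverse: read the received symbols in column wise, read out row wise."""
    return np.asarray(symbols).reshape(b, m).T.reshape(-1)

m, b = 3, 5
data = np.arange(m * b)            # e.g. three code words of length 5, laid out row wise
tx = interleave(data, m, b)
rx = tx.copy()
rx[4:7] = -1                       # a burst of three erasures (marked -1) on the channel
print(deinterleave(rx, m, b).reshape(m, b))   # the erasures end up in three different rows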
Class A Middleton channel model

(Diagram: the channel is a switched mixture of AWGN channels with noise variances σ0², σ1², σ2², ...; the I and Q components have the same variance. For each use, channel k is selected with probability Q(k), which gives the binary transition probability P(k).)
Example: Middleton's class A

Pr{ σ = σ(k) } = Q(k),   k = 0, 1, ...

σ(k) := ( (k·σI²/A + σG²) / (σI² + σG²) )^(1/2),   Q(k) := e^(−A) · A^k / k!

A is the impulsive index; σI² and σG² are the impulsive and Gaussian noise variances.

(Figure: example of parameters: the resulting binary channels for A = 0.1, A = 1 and A = 10.)
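A small Python sketch that evaluates these two formulas (the parameter values A, σI², σG² and the function name are arbitrary illustration choices, not taken from the slides):

import math

def middleton_class_a(A, var_I, var_G, k_max=10):
    """State probabilities Q(k) and normalized noise std sigma(k) for k = 0..k_max."""
    Q = [math.exp(-A) * A**k / math.factorial(k) for k in range(k_max + 1)]
    sigma = [math.sqrt((k * var_I / A + var_G) / (var_I + var_G)) for k in range(k_max + 1)]
    return Q, sigma

Q, sigma = middleton_class_a(A=0.1, var_I=1.0, var_G=0.01)
for k in range(4):
    print(f"k={k}: Q(k)={Q[k]:.4f}  sigma(k)={sigma[k]:.3f}")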
channel capacity: the BSC

I(X;Y) = H(Y) − H(Y|X)
H(Y|X) = h(p); H(Y) ≤ 1, with equality for P(X=0) = P(X=1) = 1/2

Thus C_BSC = 1 − h(p)

channel capacity: the erasure channel

(Diagram: 0 → 0 and 1 → 1 with probability 1 − e; 0 → E and 1 → E with probability e, where E denotes an erasure.)

I(X;Y) = H(X) − H(X|Y)
H(X) = h(P0)
H(X|Y) = e·h(P0)

Thus C_erasure = max over P0 of (1 − e)·h(P0) = 1 − e

P(X=0) = P0 (check! draw and compare with the BSC and the Z-channel)
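A short numeric check of the two capacity formulas (plain Python; the values of p and e are example choices):

import math

def h(p):
    """Binary entropy function in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

p, e = 0.1, 0.1
print("C_BSC(p=0.1)     =", 1 - h(p))  # about 0.531 bit per channel use
print("C_erasure(e=0.1) =", 1 - e)     # 0.9 bit per channel use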
channel models: general diagram

(Diagram: every input xi is connected to every output yj with transition probability Pj|i.)

Input alphabet X = {x1, x2, ..., xn}
Output alphabet Y = {y1, y2, ..., ym}
Pj|i = PY|X(yj|xi)

In general: calculating the capacity needs more theory.
clue:
I(X;Y) is concave (∩-convex) in the input probabilities,
so the maximizing input distribution can be found with convex-optimization techniques.
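The slides defer the details; one standard numerical method that exploits this concavity is the Blahut-Arimoto algorithm (named here as an aside, it is not part of the lecture). A minimal sketch, with my own function name and example channel:

import numpy as np

def blahut_arimoto(P, tol=1e-9, max_iter=10_000):
    """Capacity in bits per use of a DMC with transition matrix P[i, j] = P(y_j | x_i)."""
    n = P.shape[0]
    r = np.full(n, 1.0 / n)                      # current guess for the input distribution
    for _ in range(max_iter):
        q = r[:, None] * P                       # joint P(x, y) under the current guess
        q /= q.sum(axis=0, keepdims=True)        # posterior q(x | y)
        # new input distribution: r_i proportional to exp( sum_j P(j|i) log q(i|j) )
        log_r = (P * np.log(q, where=P > 0, out=np.zeros_like(P))).sum(axis=1)
        r_new = np.exp(log_r)
        r_new /= r_new.sum()
        if np.max(np.abs(r_new - r)) < tol:
            r = r_new
            break
        r = r_new
    # evaluate I(X;Y) at the (near-)optimal input distribution
    joint = r[:, None] * P
    p_y = joint.sum(axis=0)
    ratio = np.divide(joint, r[:, None] * p_y, where=joint > 0, out=np.ones_like(joint))
    return float((joint * np.log2(ratio)).sum()), r

# sanity check against the closed form for the BSC: C = 1 - h(0.1), about 0.531
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
C, r_opt = blahut_arimoto(P)
print(C, r_opt)      # about 0.531, with input distribution close to [0.5, 0.5]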
Definition:
The rate R of a code is the ratio R = k/n, where there are 2^k code words of length n.

(Diagram: the message selects one of the 2^k code words in the code book; the code word of length n is sent over the channel; from the received word the decoder forms an estimate of the message.)
Channel capacity:
sketch of proof for the BSC

Code: 2^k binary code words, generated with P(0) = P(1) = 1/2
Channel errors: P(0 → 1) = P(1 → 0) = p,
i.e. # (typical) error sequences ≈ 2^(n·h(p))
Decoder: search around the received sequence for a code word that differs in ≈ np positions

Each of the other 2^k − 1 code words is uniform over the 2^n binary sequences, so it falls inside this decoding region (≈ 2^(n·h(p)) sequences) with probability ≈ 2^(n·h(p)) / 2^n. Hence

P(≥ 1 wrong code word in the decoding region) ≤ (2^k − 1) · 2^(n·h(p)) / 2^n → 0

for R = k/n < 1 − h(p) and n → ∞.
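A quick numeric look at this union bound (n, p and the rates below are arbitrary example values):

import math

def h(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, p = 1000, 0.1                       # capacity is 1 - h(0.1), about 0.531
for R in (0.40, 0.52, 0.55):
    k = int(R * n)
    log2_bound = k + n * h(p) - n      # log2 of (2^k - 1) * 2^(n h(p)) / 2^n, roughly
    print(f"R = {R}: union bound is about 2^{log2_bound:.1f}")
# below capacity the bound vanishes; for R = 0.55 > 1 - h(p) it exceeds 1 and says nothing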
Channel capacity: converse

(Figure: error probability Pe versus the rate R = k/n; Pe is bounded away from 0 for R > C.)

Converse: for a discrete memoryless channel (Xi → channel → Yi):

I(X^n; Y^n) = H(Y^n) − Σ_{i=1..n} H(Yi | Xi) ≤ Σ_{i=1..n} H(Yi) − Σ_{i=1..n} H(Yi | Xi) = Σ_{i=1..n} I(Xi; Yi) ≤ nC

For the message M with k = H(M):

k = H(M) = I(M; Y^n) + H(M | Y^n)
         ≤ I(X^n; Y^n) + 1 + k·Pe     (X^n is a function of M; Fano)
         ≤ nC + 1 + k·Pe

so that 1 − C·n/k − 1/k ≤ Pe, i.e. Pe ≥ 1 − C/R − 1/k.

Hence: for large k and R > C,
the probability of error Pe is bounded away from 0.
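For concreteness, the converse bound evaluated for a BSC (the numbers are example choices, not from the slides):

import math

def h(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

C = 1 - h(0.1)                        # BSC capacity, about 0.531
k, R = 10_000, 0.6                    # try to signal above capacity
print("Pe >=", 1 - C / R - 1 / k)     # about 0.115: the error probability cannot go below this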
Appendix:

Assume:
a binary sequence with P(0) = 1 − P(1) = 1 − p;
t is the # of 1s in the sequence.

Then, for n → ∞ and any ε > 0 (weak law of large numbers):

Probability( |t/n − p| > ε ) → 0

Consequence:
for large n, almost every sequence that occurs contains about np ones; the number of such (typical) sequences is ≈ 2^(n·h(p)), as used in the proof sketch for the BSC.
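A quick Monte Carlo check of the statement above (sequence lengths, p and ε are arbitrary choices):

import numpy as np

rng = np.random.default_rng(1)
p, eps = 0.1, 0.02
for n in (100, 1000, 10000):
    seqs = rng.random((2000, n)) < p                      # 2000 random binary sequences
    frac = np.mean(np.abs(seqs.mean(axis=1) - p) > eps)   # fraction violating |t/n - p| <= eps
    print(f"n = {n:6d}: P(|t/n - p| > {eps}) is about {frac:.3f}")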