
TIT 3441 Information Theory
Main Reference Book:
Haykin, S., Communication Systems, 4th ed., John Wiley & Sons, 2001.
Reference Journal Paper:
Shannon, C. E., "A Mathematical Theory of Communication", Bell System Technical Journal, 1948.

Course Info.

Nazrul Muhaimin
Grading Policy

Final Exam 60%
Mid Term Test 20%
Quizzes 20% (?% each)

Course Contents

Week 1-2: Information Sources and Source Coding
Week 3-5: Channel Capacity & Coding
Week 6-7: Linear Block Coding
Week 8-9: Cyclic Coding
Week 10-13: Convolutional Coding

Lecture 1
Information Sources and
Source Coding
(Part 1)

Introduction
The purpose of a communication system is to
transmit information from one point to another with
high efficiency and reliability.
Information theory provides a quantitative
measure of the information contained in message
signals and allows us to determine the capacity of a
communication system to transfer this information
from source to destination.
Through the use of coding, redundancy can be removed from message signals so that channels can be used with improved efficiency.
In addition, systematic redundancy can be introduced into the transmitted signal so that channels can be used with improved reliability.

Introduction (1)
Information theory applies the laws of
probability, and mathematics in general, to
study the collection and manipulation of
information.
In the context of communications, information theory, originally known as the mathematical theory of communication, deals with the mathematical modelling and analysis of a communication system, rather than with physical sources and physical channels.

Introduction (2)

Introduction (3)

Introduction (4)

Introduction (5)

Introduction (6)

Information Sources
An information source is an object that produces
an event, the outcome of which is random and in
accordance with some probability distribution.
A practical information source in a communication system
is a device that produces messages, and it can be either
analogue or digital.
Here, we shall deal mainly with discrete sources, since analogue sources can be transformed into discrete sources through the use of sampling and quantisation techniques.

A discrete information source is a source that has only a finite set of symbols as possible outputs. The set of possible source symbols is called the source alphabet, and the elements of the set are called symbols.

Information Sources (1)


Information sources can be classified as
having memory
being memoryless
A source with memory is one for which a current
symbol depends on the previous symbols.
A memoryless source is one for which each symbol produced is independent of the previous symbols, i.e. the symbols emitted during successive signalling intervals are statistically independent.

A source having the properties just described is termed a discrete memoryless source, memoryless in the sense that the symbol emitted at any time is independent of previous choices.

Uncertainty and Information

Uncertainty and Information (1)
Consider the event S = sk, with probability pk. It is clear that:
if the probability pk = 1 and pi = 0 for all i ≠ k, then there is no surprise and therefore no information.
if the source symbols occur with different probabilities, and the probability pk is low, then there is more surprise and therefore more information when symbol sk is emitted by the source than when symbol si (where i ≠ k), with higher probability, is emitted.

Uncertainty and Information (2)
Thus, uncertainty, surprise and information are all related.
Before the event (S = sk) occurs, there is an amount of uncertainty. When the event occurs, there is an amount of surprise. After the occurrence of the event S = sk, there is a gain in the amount of information. All three amounts are obviously the same.

Moreover, the amount of information is related to the inverse of the probability of occurrence.

Amount of Information

Amount of Information (1)

Amount of Information (2)
So far, the term bit has been used as an abbreviation for the phrase binary digit. Hence, it is ambiguous whether bit is intended as an abbreviation for binary digit or as a unit of information measure.
=> it is customary to refer to a binary digit as a binit.
Note that if the probabilities of the two binits are not equal, one binit conveys more and the other binit conveys less than 1 bit of information.

I(sk) = log2(1/pk) = -log2(pk) in bits

For example, if the binits 0 and 1 occur with probabilities of 1/4 and 3/4 respectively, then binit 0 conveys an amount of information equal to log2(4) = 2 bits, while binit 1 conveys information amounting to log2(4/3) ≈ 0.42 bit.
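As a rough illustration of the formula above, here is a minimal Python sketch (the helper name self_information is ours, not from the slides) that reproduces the binit example:

import math

def self_information(p):
    # Amount of information I(sk) = log2(1/pk), in bits, for a symbol of probability p.
    return math.log2(1.0 / p)

# Binits 0 and 1 assumed to occur with probabilities 1/4 and 3/4, as in the example above.
print(self_information(1/4))   # 2.0 bits      (= log2 4)
print(self_information(3/4))   # ~0.415 bits   (= log2 4/3, about 0.42)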

Amount of Information (3)
Example 1.1
A source emits one of four possible symbols during each signalling interval. These symbols occur with the probabilities p0 = 0.4, p1 = 0.3, p2 = 0.2 and p3 = 0.1. Find the amount of information gained by observing the source emitting each of these symbols.
Solution
Let the event S = sk denote the emission of symbol sk by
the source.
Hence, I(sk) = log2(1/pk) bits
I(s0) = log2(1/0.4) = 1.322 bits
I(s1) = log2(1/0.3) = 1.737 bits
I(s2) = log2(1/0.2) = 2.322 bits
I(s3) = log2(1/0.1) = 3.322 bits
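For readers who want to verify these numbers, a short Python check (purely illustrative, not part of the original lecture) computes the same values:

import math

# Example 1.1: information gained from each symbol, I(sk) = log2(1/pk)
probs = [0.4, 0.3, 0.2, 0.1]
for k, p in enumerate(probs):
    print(f"I(s{k}) = {math.log2(1 / p):.3f} bits")
# Prints 1.322, 1.737, 2.322 and 3.322 bits, matching the solution above.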

Average Information and Entropy
Messages produced by information sources consist
of sequences of symbols. While the receiver of a
message may interpret the entire message as a
single unit, communication systems often have to
deal with individual symbols.
For example, if we are sending messages in English
language, the user at the receiving end is interested mainly
in words, phrases and sentences, whereas the
communication system has to deal with individual letters or
symbols.

Hence it is desirable to know the average information content per source symbol, also known as the entropy, H.

Average Information and Entropy (1)
For a discrete memoryless source with alphabet {s0, s1, ..., sK-1} and symbol probabilities pk, the average information content per source symbol is

H = Σ_{k=0}^{K-1} pk log2(1/pk) bits/symbol

Average Information and Entropy (2)
The quantity H is called the entropy of a discrete
memoryless source. It is a measure of the average
information content per source symbol. It may be
noted that the entropy H depends on the probabilities
of the symbols in the alphabet of the source.
Example 1.2
Consider a discrete memoryless source with source alphabet
{s0,s1,s2} with probabilities p0=1/4, p1=1/4 and p2=1/2. Find the
entropy of the source.
Solution
The entropy of the given source is
H = p0 log2(1/p0) + p1 log2(1/p1) + p2 log2(1/p2)
  = (1/4) log2(4) + (1/4) log2(4) + (1/2) log2(2)
  = 2/4 + 2/4 + 1/2 = 3/2 bits
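The same calculation can be sketched in a few lines of Python; the entropy function below is a hypothetical helper, not code from the course:

import math

def entropy(probs):
    # H = sum over k of pk * log2(1/pk), in bits per symbol; zero-probability symbols contribute nothing.
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Example 1.2: p0 = p1 = 1/4, p2 = 1/2
print(entropy([0.25, 0.25, 0.5]))   # 1.5 bits/symbol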

Average Information and Entropy (3)
Example 1.3
Consider another source X, which has an infinitely large set of outputs with probabilities of occurrence given by P(xi) = 2^-i, i = 1, 2, 3, ... What is the average information, or entropy, H(X) of the source?
Solution

H(X) = Σ_{i=1}^{∞} p(xi) log2(1/p(xi))
     = Σ_{i=1}^{∞} 2^-i log2(2^i)
     = Σ_{i=1}^{∞} i · 2^-i
     = 2 bits
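Since the alphabet is infinite, the sum cannot be written out in full, but truncating it numerically shows the convergence to 2 bits. A minimal Python sketch (illustrative only, with an arbitrary cut-off of 60 terms):

# Example 1.3: P(xi) = 2**-i, so each term of the entropy sum is i * 2**-i.
# Truncating the infinite series at 60 terms is enough to see it converge to 2 bits.
H = sum(i * 2.0 ** -i for i in range(1, 61))
print(H)   # ~2.0 bits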

Average Information and Entropy (4)
[Figure: graphs of the functions x − 1 and log x versus x.]

Average Information and Entropy (5)
Example 1.4

Average Information and Entropy (5)
Let us examine H under different cases for K = 2:
Case I:   p0 = 0.01, p1 = 0.99, H = 0.08
Case II:  p0 = 0.4,  p1 = 0.6,  H = 0.97
Case III: p0 = 0.5,  p1 = 0.5,  H = 1

In Case I, it is very easy to guess whether the message s0 with probability 0.01 will occur or the message s1 with probability 0.99 will occur (most of the time message s1 will occur). Thus in this case, the uncertainty is less.
In Case II, it is somewhat difficult to guess whether s0 or s1 will occur, as their probabilities are nearly equal. Thus in this case, the uncertainty is more.
In Case III, it is extremely difficult to guess whether s0 or s1 will occur, as their probabilities are equal. Thus in this case, the uncertainty is maximum.
Entropy is less when uncertainty is less. Entropy is more when uncertainty is more. Thus, we can say that entropy is a measure of uncertainty.
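These three cases can be reproduced with a small binary-entropy helper in Python (binary_entropy is our own illustrative name, not from the slides):

import math

def binary_entropy(p0):
    # Entropy of a two-symbol (K = 2) source with probabilities p0 and 1 - p0, in bits.
    return sum(p * math.log2(1.0 / p) for p in (p0, 1.0 - p0) if p > 0)

for p0 in (0.01, 0.4, 0.5):   # Cases I, II and III
    print(f"p0 = {p0}: H = {binary_entropy(p0):.2f}")
# p0 = 0.01: H = 0.08,  p0 = 0.4: H = 0.97,  p0 = 0.5: H = 1.00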