Lecture 01-02, Part 1: Theory
Main Reference Book:
Haykin, S., Communication Systems, 4th ed., John Wiley & Sons, 2001.
Reference Journal Paper:
Shannon, C. E., "A Mathematical Theory of Communication," Bell System Technical Journal, 1948.
Course Info.
Nazrul Muhaimin
Lecture 1
Information Sources and Source Coding
(Part 1)
Introduction
The purpose of a communication system is to
transmit information from one point to another with
high efficiency and reliability.
Information theory provides a quantitative
measure of the information contained in message
signals and allows us to determine the capacity of a
communication system to transfer this information
from source to destination.
Through the use of coding, redundancy in message signals can be reduced so that channels can be used with improved efficiency.
In addition, systematic redundancy can be introduced into the transmitted signal so that channels can be used with improved reliability.
Introduction (1)
Information theory applies the laws of
probability, and mathematics in general, to
study the collection and manipulation of
information.
In the context of communications, information theory, originally known as the mathematical theory of communication, deals with the mathematical modelling and analysis of a communication system, rather than with physical sources and physical channels.
Information Sources
An information source is an object that produces
an event, the outcome of which is random and in
accordance with some probability distribution.
A practical information source in a communication system
is a device that produces messages, and it can be either
analogue or digital.
Here, we shall deal mainly with discrete sources, since analogue sources can be transformed into discrete sources through the use of sampling and quantisation techniques.
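A minimal sketch of that transformation, with all parameters (sampling rate, test tone, number of levels) chosen purely for illustration:

```python
import numpy as np

# Sketch (assumed parameters, not from the lecture): turning an
# analogue source into a discrete one by sampling and uniform quantisation.
fs = 8000                              # sampling rate in Hz (assumption)
t = np.arange(0, 0.01, 1 / fs)         # 10 ms of time axis
x = np.sin(2 * np.pi * 440 * t)        # stand-in analogue signal: 440 Hz tone

levels = 8                             # 3-bit uniform quantiser (assumption)
# Map each sample in [-1, 1] to the nearest of `levels` integer indices.
idx = np.round((x + 1) / 2 * (levels - 1)).astype(int)
x_q = idx / (levels - 1) * 2 - 1       # discrete amplitudes actually emitted

print(sorted(set(idx)))                # the finite symbol alphabet
```

After sampling and quantisation, the source emits symbols from a finite alphabet, which is exactly the discrete-source model used in the rest of the lecture.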
Uncertainty and Information
Amount of Information
So far, the term bit has been used as an abbreviation for the phrase binary digit. Hence, it is ambiguous whether bit is intended as an abbreviation for binary digit or as a unit of information measure; it is therefore customary to refer to a binary digit as a binit.
Note that if the probabilities of the two binits are not equal, one binit conveys more and the other binit conveys less than 1 bit of information.
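A quick numerical check of this point (the 0.9/0.1 split below is an illustrative assumption, not a value from the lecture):

```python
from math import log2

# With unequal binit probabilities, the two binits carry
# unequal amounts of information: I = log2(1/p).
p0, p1 = 0.9, 0.1
print(f"I(0) = {log2(1 / p0):.3f} bit")   # ~0.152 bit, less than 1 bit
print(f"I(1) = {log2(1 / p1):.3f} bits")  # ~3.322 bits, more than 1 bit
```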
Example 1.1
A source emits one of four possible symbols during each
signalling interval. These symbols occur with the
probabilities p0 = 0.4, p1 = 0.3, p2 = 0.2, and p3 = 0.1. Find
the amount of information gained by observing the
source emitting each of these symbols.
Solution
Let the event S = sk denote the emission of symbol sk by
the source.
Hence, I(sk) = log2(1/pk) bits
I(s0) = log2(1/0.4) = 1.322 bits
I(s1) = log2(1/0.3) = 1.737 bits
I(s2) = log2(1/0.2) = 2.322 bits
I(s3) = log2(1/0.1) = 3.322 bits
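The same calculation can be sketched in a few lines of Python, directly from the definition I(sk) = log2(1/pk):

```python
from math import log2

# Reproducing Example 1.1: I(sk) = log2(1/pk) for each symbol probability.
probs = [0.4, 0.3, 0.2, 0.1]
for k, p in enumerate(probs):
    print(f"I(s{k}) = {log2(1 / p):.3f} bits")
# Prints 1.322, 1.737, 2.322 and 3.322 bits, matching the solution above.
```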
For a source whose symbols occur with probabilities p(xi) = 2^(-i), i = 1, 2, 3, ..., the entropy is

H(X) = Σ_{i=1}^{∞} p(xi) log2(1/p(xi)) = Σ_{i=1}^{∞} 2^(-i) log2(2^i) = Σ_{i=1}^{∞} i · 2^(-i) = 2 bits
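A short sketch verifying this result numerically; the `entropy` helper below is simply the definition of H(X), and truncating the infinite sum is an approximation chosen for illustration:

```python
from math import log2

# Entropy as the average amount of information per source symbol.
def entropy(probs):
    return sum(p * log2(1 / p) for p in probs if p > 0)

# Source with p(xi) = 2**-i: truncating the infinite sum after 60 terms
# already reproduces the closed-form result of 2 bits.
print(entropy([2.0 ** -i for i in range(1, 61)]))   # ~2.0 bits

# For comparison, the entropy of the Example 1.1 source:
print(entropy([0.4, 0.3, 0.2, 0.1]))                # ~1.846 bits
```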
Figure: graphs of the functions x − 1 and log x versus x.
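The figure presumably illustrates the inequality log x ≤ x − 1 (natural logarithm assumed here), which is the standard tool for proving entropy bounds such as 0 ≤ H(X) ≤ log2 K. A quick numerical check:

```python
import numpy as np

# Numerical check: ln(x) <= x - 1 for all x > 0, with equality only at x = 1.
# (Natural log assumed; this inequality underlies the usual entropy bounds.)
x = np.linspace(0.01, 5.0, 1000)
assert np.all(np.log(x) <= x - 1 + 1e-12)
print(np.log(1.0), 1.0 - 1.0)   # both 0.0: the two curves touch at x = 1
```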