
Minimizing Bit Error Rate in a Digital Communication System Using Theoretical and Experimental Probability

Julito B. Anora Jr., REE
Department of Electronics and Electrical Engineering
MEE-CSO, University of San Carlos, Cebu City
[email protected]

Abstract: Data communication is essential in the exchange of information such as emails, teleconferencing, interactive software, and countless other applications across the fields of science and engineering. However, in the transmission of data, communication systems are always subject to random noise. Noise carries no useful information and interferes with the information the received signal represents. Since noise is an undesired random disturbance of useful information, a means of correcting it should be considered to preserve the original transmitted signal. One such means is a decision rule at the receiver that identifies the transmitted signal and recovers the information it represented. The main purpose of the decision rule is to minimize the probability of error, or what is called the Bit Error Rate (BER).

This paper, therefore, explains methodologies for correcting distorted signals at the receiver by means of theoretical and experimental probability. Using Bayes' theorem, the decision rule is derived. This decision rule for minimizing the Bit Error Rate is the ultimate goal of this paper, as it relates the variables that make up the communication system: the transmitted signals, the noise signals, and the received signals.

In telecommunication transmission, the Bit Error Rate (BER) is the percentage of bits that have errors relative to the total number of bits received in the transmission. The BER indicates how often a packet or other data unit has to be retransmitted because of an error. Too high a BER may indicate that a slower data rate would actually improve overall transmission time for a given amount of transmitted data, since the lower rate might reduce the BER and thereby lower the number of packets that have to be resent.

Keywords: noise signals; bit error rate; communication; probability techniques; systems; digital

I. INTRODUCTION

In a communication system, noise is a summation of unwanted or disturbing energy from natural and sometimes man-made sources. Noise can be associated with deliberate jamming or other unwanted electromagnetic interference from transmitters, as well as with the unwanted systematic alteration of the signal waveform by the communication equipment itself.

When a signal is transmitted from a source or signal generator, it is ideally assumed that the bits of information are received at the receiver side with exactly the same information and nature (or its equivalent) as transmitted at the transmitter side. However, because a communication system is not at all noise-resistant, the transmitted signals are inevitably affected by the integration of unwanted signals. This results in inaccurate data transmission and an erratic, noisy system.

While this is a problem, it can be remedied by implementing a decision rule at the receiver side that distinguishes the received signal in correspondence with the transmitted signals and recovers the information it conveyed and represented.

The decision rule is derived using several techniques in probability theory, particularly conditional probability and Bayes' theorem. It is then verified using the experimental relative-frequency approach.
II. BIT ERROR RATE

III. TRANSMITTED SIGNALS, NOISE SIGNALS, RECEIVED SIGNALS

A. Transmitted Signals
In the analysis of this paper, the repeatedly sent signal S can be represented by a (1 x n) vector, where n is the number of times the signal is sent. The transmitted signal S has two values, +1 or -1, and represents sets of information transmitted one at a time. To increase system reliability, however, the source signal S is transmitted three times, so that S is represented by either [+1,+1,+1] or [-1,-1,-1].
Either signal is generated with equal likelihood; each thus occurs with 50% probability.

B. Noise Signals
The noise signals can be represented by vectors in the same manner as the transmitted signals: a (1 x 3) matrix. The noise signals N take the finite values {+2, +1, 0, -1, -2}, with element probabilities of occurring {0.1, 0.2, 0.4, 0.2, 0.1}, respectively. Each element in the vector N is independent of the others, which means that the first noise sample won't affect the behavior of the second or the third, and so on. Because of this, the probability of each noise vector is simply the product of the probabilities of its elements, as for any sequence of independent events.
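As a sketch of this computation (in Python rather than the paper's MATLAB; the names are illustrative), the probability of a noise vector is the product of its element probabilities:

```python
from itertools import product

# Element-wise noise distribution from the paper: values {+2,+1,0,-1,-2}
# with probabilities {0.1, 0.2, 0.4, 0.2, 0.1}.
p_noise = {2: 0.1, 1: 0.2, 0: 0.4, -1: 0.2, -2: 0.1}

def prob_noise_vector(nvec):
    """P(N) for a (1 x 3) noise vector: the product of the element
    probabilities, since the three noise samples are independent."""
    p = 1.0
    for element in nvec:
        p *= p_noise[element]
    return p

print(prob_noise_vector((0, 0, 0)))   # 0.4*0.4*0.4 = 0.064
print(prob_noise_vector((2, -1, 0)))  # 0.1*0.2*0.4 = 0.008
```

Summing this product over all 125 possible noise vectors gives 1, confirming the distribution is properly normalized.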
C. Received Signals
The signal received at the receiving station, X, is simply the sum of the two vectors, S + N. The values of the X vector depend upon the values of the sent signal S and the noise N associated with the system.
IV. DECISION RULE

The decision rule implemented in the receiving station follows from computations and analysis using conditional probability and Bayes' theorem, as described:
a. The received signals X are derived after the integration of the additive noise N into the transmitted signal S.
b. The noise combinations are sorted and arranged, and it is determined logically from which signal source the received signal originates.
c. Each received signal's probability is computed using the theory of conditional probability.
d. The decision rule is derived using Bayes' theorem.

The received signal at the receiving station is given by the formula:

X = S + N   (1)

A. Signal Combinational Output
The basis for arriving at the decision rule is the set of signal output combinations at the receiving station. As mentioned, the vector elements of X depend upon the elements of vector N (and, of course, on the signal source S).

Since each element of vector N can take the values {+2, +1, 0, -1, -2}, the vector X may have elements spanning the values [+3, +2, +1, 0, -1, -2, -3], given the source signal S of [+1, -1].

As discussed, each element of vector N is independent of the others. Thus, if the first element of N has 5 possible values, the second element also has 5 possible values, and the same applies to the third element. By the fundamental principle of counting, the total number of possible N vectors is 5 x 5 x 5 = 125.

Since the source signal S, which has two possible values {+1, -1}, is affected by the elements of the vector N, the total number of outputs is therefore 2 x 125 = 250.

Since the received signal X is the combination of the vectors N and S, the elements of X are represented by [X1, X2, X3], where Xk is the sum of the vector elements Nk and Sk (k = 1, 2, 3).

Fig. 1: Signal Source (-1) and the Noise

B. Sorting of Signal Combinations
The possible output combinations at the receiving station can be sorted and arranged according to which received signal output combinations surely originate from either +1 or -1. This means that among the 250 possible received signal output combinations, some can be attributed with 100% probability to either the source signal +1 or the source signal -1.

The output combinations identified with 100% probability need no further analysis and automatically become part of the decision rule. The other signal output combinations, in which neither +3, +2, -3, nor -2 appears as a vector element, are isolated and considered later for in-depth analysis.

By sorting the signal combinations in this way, deriving the decision rule at the receiving station becomes much easier to deal with.
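The counting and sorting above can be sketched as follows (a Python illustration with hypothetical names, not the paper's MATLAB code):

```python
from itertools import product

noise_values = (2, 1, 0, -1, -2)

# All 125 noise vectors, combined with the two sources, give 250 received vectors.
received = [(s, tuple(s + n for n in nvec))
            for s in (+1, -1)
            for nvec in product(noise_values, repeat=3)]

# A received vector is decided immediately when it contains +3/+2 (surely from
# source +1) or -3/-2 (surely from source -1); otherwise it is ambiguous and is
# kept for in-depth analysis.
ambiguous = [x for _, x in received if all(e in (-1, 0, 1) for e in x)]

print(len(received), len(ambiguous))  # 250 54
```

The 54 ambiguous vectors (27 per source) are exactly the combinations analyzed in the next sections.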
C. Computations of Conditional Probability
From the sorted signal combinations, the vectors in which no +3, +2, -3, or -2 element appears will have their probabilities computed given the probabilities of each of the corresponding values of N.

Fig. 2: Signal Source (+1) and the Noise

In general, Xk denotes the vector combination of Nk and Sk. Note, however, that S has only the two possible values [+1, -1]; the notation Sk (where k = 1, 2, 3) simply represents the number of times the noise is added and does not imply that other signal-source values are introduced into the system.

Where the X vectors comprise elements [+3, +2, X3], [+3, X2, +2], [X1, +3, +2], or any other signal output combination that includes +3 or +2, then with 100% probability the signal comes from the signal source +1. Given the finite values of N {+2, +1, 0, -1, -2}, the elements +3 and +2 could not have come from the signal source -1.

On the other hand, where the vectors comprise elements [-3, -2, X3], [-3, X2, -2], [X1, -3, -2], or any other signal output combination that includes -3 or -2, then with 100% probability the signal comes from the signal source -1. Given the elements of N {+2, +1, 0, -1, -2}, the elements -3 and -2 could not have come from the signal source +1.

Fig. 1 and Fig. 2 show that the 250 output combinations are sorted down to 54: 27 combinations for the signal source +1 and another 27 for the signal source -1. Table I tabulates the probabilities of the corresponding noise N and its relation to the received signals for the source signal S = -1. The last column represents the received signal combination after the vector addition of S and N.

TABLE I. CONDITIONAL PROBABILITY OF NOISE N GIVEN S = -1

P(S)   P(N1|S)  P(N2|S)  P(N3|S)  P(S)*P(N|S)   X = (S+N)
0.5    0.1      0.1      0.1      0.0005        [ 1  1  1]
0.5    0.1      0.1      0.2      0.001         [ 1  1  0]
0.5    0.1      0.2      0.1      0.001         [ 1  0  1]
0.5    0.2      0.1      0.1      0.001         [ 0  1  1]
0.5    0.1      0.1      0.4      0.002         [ 1  1 -1]
0.5    0.1      0.2      0.2      0.002         [ 1  0  0]
0.5    0.1      0.4      0.1      0.002         [ 1 -1  1]
0.5    0.2      0.1      0.2      0.002         [ 0  1  0]
0.5    0.2      0.2      0.1      0.002         [ 0  0  1]
0.5    0.4      0.1      0.1      0.002         [-1  1  1]
0.5    0.1      0.2      0.4      0.004         [ 1  0 -1]
0.5    0.1      0.4      0.2      0.004         [ 1 -1  0]
0.5    0.2      0.1      0.4      0.004         [ 0  1 -1]
0.5    0.2      0.2      0.2      0.004         [ 0  0  0]
0.5    0.2      0.4      0.1      0.004         [ 0 -1  1]
0.5    0.4      0.1      0.2      0.004         [-1  1  0]
0.5    0.4      0.2      0.1      0.004         [-1  0  1]
0.5    0.1      0.4      0.4      0.008         [ 1 -1 -1]
0.5    0.2      0.2      0.4      0.008         [ 0  0 -1]
0.5    0.2      0.4      0.2      0.008         [ 0 -1  0]
0.5    0.4      0.1      0.4      0.008         [-1  1 -1]
0.5    0.4      0.2      0.2      0.008         [-1  0  0]
0.5    0.4      0.4      0.1      0.008         [-1 -1  1]
0.5    0.2      0.4      0.4      0.016         [ 0 -1 -1]
0.5    0.4      0.2      0.4      0.016         [-1  0 -1]
0.5    0.4      0.4      0.2      0.016         [-1 -1  0]
0.5    0.4      0.4      0.4      0.032         [-1 -1 -1]

Equation (2) describes the relationship of the source signal S and the received signal X. To clarify the relation between X and S, assume that S is either S+1 or S-1, where S+1 comprises the elements [+1,+1,+1] and S-1 the elements [-1,-1,-1]:

X = S±1 + N   (2)

TABLE II. CONDITIONAL PROBABILITY OF NOISE N GIVEN S = +1

P(S)   P(N1|S)  P(N2|S)  P(N3|S)  P(S)*P(N|S)   X = (S+N)
0.5    0.4      0.4      0.4      0.032         [ 1  1  1]
0.5    0.4      0.4      0.2      0.016         [ 1  1  0]
0.5    0.4      0.2      0.4      0.016         [ 1  0  1]
0.5    0.2      0.4      0.4      0.016         [ 0  1  1]
0.5    0.4      0.4      0.1      0.008         [ 1  1 -1]
0.5    0.4      0.2      0.2      0.008         [ 1  0  0]
0.5    0.4      0.1      0.4      0.008         [ 1 -1  1]
0.5    0.2      0.4      0.2      0.008         [ 0  1  0]
0.5    0.2      0.2      0.4      0.008         [ 0  0  1]
0.5    0.1      0.4      0.4      0.008         [-1  1  1]
0.5    0.4      0.2      0.1      0.004         [ 1  0 -1]
0.5    0.4      0.1      0.2      0.004         [ 1 -1  0]
0.5    0.2      0.4      0.1      0.004         [ 0  1 -1]
0.5    0.2      0.2      0.2      0.004         [ 0  0  0]
0.5    0.2      0.1      0.4      0.004         [ 0 -1  1]
0.5    0.1      0.4      0.2      0.004         [-1  1  0]
0.5    0.1      0.2      0.4      0.004         [-1  0  1]
0.5    0.4      0.1      0.1      0.002         [ 1 -1 -1]
0.5    0.2      0.2      0.1      0.002         [ 0  0 -1]
0.5    0.2      0.1      0.2      0.002         [ 0 -1  0]
0.5    0.1      0.4      0.1      0.002         [-1  1 -1]
0.5    0.1      0.2      0.2      0.002         [-1  0  0]
0.5    0.1      0.1      0.4      0.002         [-1 -1  1]
0.5    0.2      0.1      0.1      0.001         [ 0 -1 -1]
0.5    0.1      0.2      0.1      0.001         [-1  0 -1]
0.5    0.1      0.1      0.2      0.001         [-1 -1  0]
0.5    0.1      0.1      0.1      0.0005        [-1 -1 -1]

Table II shows the same tabulation, here for the source signal S = +1; again, the last column represents the received signal combination after the vector addition of S and N.

Since the receiving station now handles the 54 output combinations (sorted out of the 250 combinations), the probability of the received signal X given S, P(X|S), and the probability of the received signal X, P(X), must still be derived, even though it is already known that each of the source signals +1 and -1 occurs with 50% probability.

As described earlier, the vector X depends on the probability of the generated signal S and the probability that the noise N occurs. The probability of a noise vector, P(N), is simply the product of the probabilities of its elements over the three transmissions. Translating this yields the probability of the signal received at the receiving station, summarized by the formula:

P(X) = P(S+1)P(N|S+1) + P(S-1)P(N|S-1)   (3)
D. Bayes' Theorem
Within the axioms of probability, Bayes' theorem specifies the chances that a given signal combination originates from each signal source.

By Bayes' theorem, if it is desired to compute the probability of S given X, the general equation is of great use:

P(S|X) = P(X|S)P(S) / P(X)   (4)

Equation (3) relates the probabilities of X at the receiving station to the transmitted signal S and the noise N associated with the system.

Table III tabulates the conditional probability of the vector X after the random noise N is associated with the system, given the source signal S. The probability values in the last column of Table III are derived from (3). Since it is known that each transmitted signal S has a 50% chance of occurring, P(S+1) and P(S-1) are both 0.5, while P(N|S+1) and P(N|S-1) are the products of the noise-signal probabilities corresponding to the finite values of N.

In Fig. 1, with the transmitted signal -1, the corresponding noise values {0, +1, +2}, with probabilities {0.4, 0.2, 0.1}, are the noise elements left after the received signals are sorted and arranged. The vector elements of X in the 27 signal combinations comprise {0, +1, -1} and are randomly arranged after the signal source is transmitted three times. In Fig. 2, with the transmitted signal +1, the corresponding noise values {0, -1, -2}, with probabilities {0.4, 0.2, 0.1}, are the noise elements left after the received signals are sorted and arranged.
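The posterior computation of (4) can be sketched as follows (a Python verification sketch, separate from the paper's MATLAB program; function names are illustrative):

```python
# Element-wise noise distribution from the paper.
p_noise = {2: 0.1, 1: 0.2, 0: 0.4, -1: 0.2, -2: 0.1}
p_s = {+1: 0.5, -1: 0.5}

def prob_n_given_s(x, s):
    """P(N|S) for the noise vector N = X - S (element-wise),
    with the source value s repeated three times."""
    p = 1.0
    for e in x:
        p *= p_noise.get(e - s, 0.0)  # impossible noise value -> probability 0
    return p

def posterior(s, x):
    """Bayes' theorem: P(S|X) = P(X|S)P(S) / P(X),
    where P(X) sums P(S)P(X|S) over both sources."""
    px = sum(p_s[t] * prob_n_given_s(x, t) for t in (+1, -1))
    return p_s[s] * prob_n_given_s(x, s) / px

print(round(posterior(+1, (1, 1, 1)), 5))  # 0.98462, as in Table IV
```

For X = [1 1 1] this reproduces the first row of Table IV: P(X) = 0.5(0.064) + 0.5(0.001) = 0.0325 and P(S+1|X) = 0.032/0.0325 = 0.98462.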
E. Decision Rule Equation
Based on the discussions above, Table IV summarizes the probabilities of the transmitted signal being [+1,+1,+1] or [-1,-1,-1] given the received signals at the receiving station. Each case of the vector X is derived using conditional probability and Bayes' theorem.

As previously discussed, the only elements of X that come with 100% probability from the signal source -1 are {-3, -2}; the vector combinations of X comprising -3 or -2 are therefore no longer included in the analysis. Likewise, the elements of X that come with 100% probability from the signal source +1 are {+3, +2}.

This analysis is central to arriving at a decision rule, since the signal source -1, given the finite values of N {+2, +1, 0, -1, -2}, can in no way generate a vector with elements +3 or +2; obtaining these elements is therefore 100% favorable to the signal source +1. The same applies when the source transmits +1: given the finite values of N, the source +1 can in no way generate a vector with elements -3 or -2, so obtaining those elements is 100% favorable to the signal source -1.

As can be seen in Table I and Table II, columns 2-4, labeled P(N|S), represent the probabilities of the noise elements given the transmitted signal S. Column 5 is the product of these probabilities with P(S). Column 6 represents the vector X as the sum of the vectors S and N.

In the analysis of the signals X received at the receiving station, it might be assumed that the sum of all probabilities from both signals +1 and -1 equates to 1. While this is true in general, it does not hold for the 54 signal combinations alone, since they are taken out of the 250 total outputs X; to check that the probabilities sum to 1, all 250 outputs would have to be summed. However, in the analysis using Bayes' theorem, each case of signal output must equate to 1 when verified.

TABLE III. CONDITIONAL PROBABILITY OF X

X = (S+N)    P(S+1)P(N|S+1) + P(S-1)P(N|S-1)     P(X)
[ 1  1  1]   0.5*0.4*0.4*0.4 + 0.5*0.1*0.1*0.1   0.0325
[ 1  1  0]   0.5*0.4*0.4*0.2 + 0.5*0.1*0.1*0.2   0.017
[ 1  0  1]   0.5*0.4*0.2*0.4 + 0.5*0.1*0.2*0.1   0.017
[ 0  1  1]   0.5*0.2*0.4*0.4 + 0.5*0.2*0.1*0.1   0.017
[ 1  1 -1]   0.5*0.4*0.4*0.1 + 0.5*0.1*0.1*0.4   0.01
[ 1  0  0]   0.5*0.4*0.2*0.2 + 0.5*0.1*0.2*0.2   0.01
[ 1 -1  1]   0.5*0.4*0.1*0.4 + 0.5*0.1*0.4*0.1   0.01
[ 0  1  0]   0.5*0.2*0.4*0.2 + 0.5*0.2*0.1*0.2   0.01
[ 0  0  1]   0.5*0.2*0.2*0.4 + 0.5*0.2*0.2*0.1   0.01
[-1  1  1]   0.5*0.1*0.4*0.4 + 0.5*0.4*0.1*0.1   0.01
[ 1  0 -1]   0.5*0.4*0.2*0.1 + 0.5*0.1*0.2*0.4   0.008
[ 1 -1  0]   0.5*0.4*0.1*0.2 + 0.5*0.1*0.4*0.2   0.008
[ 0  1 -1]   0.5*0.2*0.4*0.1 + 0.5*0.2*0.1*0.4   0.008
[ 0  0  0]   0.5*0.2*0.2*0.2 + 0.5*0.2*0.2*0.2   0.008
[ 0 -1  1]   0.5*0.2*0.1*0.4 + 0.5*0.2*0.4*0.1   0.008
[-1  1  0]   0.5*0.1*0.4*0.2 + 0.5*0.4*0.1*0.2   0.008
[-1  0  1]   0.5*0.1*0.2*0.4 + 0.5*0.4*0.2*0.1   0.008
[ 1 -1 -1]   0.5*0.4*0.1*0.1 + 0.5*0.1*0.4*0.4   0.01
[ 0  0 -1]   0.5*0.2*0.2*0.1 + 0.5*0.2*0.2*0.4   0.01
[ 0 -1  0]   0.5*0.2*0.1*0.2 + 0.5*0.2*0.4*0.2   0.01
[-1  1 -1]   0.5*0.1*0.4*0.1 + 0.5*0.4*0.1*0.4   0.01
[-1  0  0]   0.5*0.1*0.2*0.2 + 0.5*0.4*0.2*0.2   0.01
[-1 -1  1]   0.5*0.1*0.1*0.4 + 0.5*0.4*0.4*0.1   0.01
[ 0 -1 -1]   0.5*0.2*0.1*0.1 + 0.5*0.2*0.4*0.4   0.017
[-1  0 -1]   0.5*0.1*0.2*0.1 + 0.5*0.4*0.2*0.4   0.017
[-1 -1  0]   0.5*0.1*0.1*0.2 + 0.5*0.4*0.4*0.2   0.017
[-1 -1 -1]   0.5*0.1*0.1*0.1 + 0.5*0.4*0.4*0.4   0.0325

From the above discussions, the following conditions are derived for the decision rule:

1. Signal S is +1 when the elements of X include +3 and/or +2.
2. Signal S is +1 when the sum of the elements of X is greater than or equal to zero.
3. Signal S is -1 when the elements of X include -3 and/or -2.
4. Signal S is -1 otherwise.

Translating the conditions above, let [X1, X2, X3] be the elements of vector X:

S = +1, if (X1 + X2 + X3) >= 0
S = -1, otherwise
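The four conditions can be sketched as a single function (a Python sketch; note that conditions 1 and 3 are checked before the sum test, since a -2 or -3 element identifies the source -1 even when the sum of the elements is non-negative):

```python
def decide(x):
    """Decision rule for a received vector x = (X1, X2, X3)."""
    if any(e >= 2 for e in x):    # +3/+2 can only come from source +1
        return +1
    if any(e <= -2 for e in x):   # -3/-2 can only come from source -1
        return -1
    # Ambiguous vectors (elements in {-1, 0, 1}): decide by the sum.
    return +1 if sum(x) >= 0 else -1

print(decide((3, 2, 0)))    # 1: contains +3/+2
print(decide((-2, 1, 1)))   # -1: the -2 overrides the non-negative sum
print(decide((0, -1, 1)))   # 1: sum is 0
print(decide((-1, -1, 1)))  # -1: sum is negative
```

The second example shows why the element conditions must take precedence over the sum condition.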
TABLE IV. COMPARISON OF P(S+1|X) AND P(S-1|X)

X            P(S+1|X)   P(S-1|X)
[ 1  1  1]   0.98462    0.01538
[ 1  1  0]   0.94118    0.05882
[ 1  0  1]   0.94118    0.05882
[ 0  1  1]   0.94118    0.05882
[ 1  1 -1]   0.8        0.2
[ 1  0  0]   0.8        0.2
[ 1 -1  1]   0.8        0.2
[ 0  1  0]   0.8        0.2
[ 0  0  1]   0.8        0.2
[-1  1  1]   0.8        0.2
[ 1  0 -1]   0.5        0.5
[ 1 -1  0]   0.5        0.5
[ 0  1 -1]   0.5        0.5
[ 0  0  0]   0.5        0.5
[ 0 -1  1]   0.5        0.5
[-1  1  0]   0.5        0.5
[-1  0  1]   0.5        0.5
[ 1 -1 -1]   0.2        0.8
[ 0  0 -1]   0.2        0.8
[ 0 -1  0]   0.2        0.8
[-1  1 -1]   0.2        0.8
[-1  0  0]   0.2        0.8
[-1 -1  1]   0.2        0.8
[ 0 -1 -1]   0.05882    0.94118
[-1  0 -1]   0.05882    0.94118
[-1 -1  0]   0.05882    0.94118
[-1 -1 -1]   0.01538    0.98462

TABLE V. BIT ERROR RATE COMPUTATIONS

For each received vector X, the column S lists the source signal that would be decoded incorrectly under the decision rule DR; P(N) = P(S)*P(N|S) is that error's contribution to the BER.

S    P(S)   P(N1|S)  P(N2|S)  P(N3|S)  X = (S+N)    P(N)     DR
-1   0.5    0.1      0.1      0.1      [ 1  1  1]   0.0005   +1
-1   0.5    0.1      0.1      0.2      [ 1  1  0]   0.001    +1
-1   0.5    0.1      0.2      0.1      [ 1  0  1]   0.001    +1
-1   0.5    0.2      0.1      0.1      [ 0  1  1]   0.001    +1
-1   0.5    0.1      0.1      0.4      [ 1  1 -1]   0.002    +1
-1   0.5    0.1      0.2      0.2      [ 1  0  0]   0.002    +1
-1   0.5    0.1      0.4      0.1      [ 1 -1  1]   0.002    +1
-1   0.5    0.2      0.1      0.2      [ 0  1  0]   0.002    +1
-1   0.5    0.2      0.2      0.1      [ 0  0  1]   0.002    +1
-1   0.5    0.4      0.1      0.1      [-1  1  1]   0.002    +1
-1   0.5    0.1      0.2      0.4      [ 1  0 -1]   0.004    +1
-1   0.5    0.1      0.4      0.2      [ 1 -1  0]   0.004    +1
-1   0.5    0.2      0.1      0.4      [ 0  1 -1]   0.004    +1
-1   0.5    0.2      0.2      0.2      [ 0  0  0]   0.004    +1
-1   0.5    0.2      0.4      0.1      [ 0 -1  1]   0.004    +1
-1   0.5    0.4      0.1      0.2      [-1  1  0]   0.004    +1
-1   0.5    0.4      0.2      0.1      [-1  0  1]   0.004    +1
+1   0.5    0.4      0.1      0.1      [ 1 -1 -1]   0.002    -1
+1   0.5    0.2      0.2      0.1      [ 0  0 -1]   0.002    -1
+1   0.5    0.2      0.1      0.2      [ 0 -1  0]   0.002    -1
+1   0.5    0.1      0.4      0.1      [-1  1 -1]   0.002    -1
+1   0.5    0.1      0.2      0.2      [-1  0  0]   0.002    -1
+1   0.5    0.1      0.1      0.4      [-1 -1  1]   0.002    -1
+1   0.5    0.2      0.1      0.1      [ 0 -1 -1]   0.001    -1
+1   0.5    0.1      0.2      0.1      [-1  0 -1]   0.001    -1
+1   0.5    0.1      0.1      0.2      [-1 -1  0]   0.001    -1
+1   0.5    0.1      0.1      0.1      [-1 -1 -1]   0.0005   -1

BIT ERROR RATE (BER): 0.059

Table V summarizes the probability that the communication system incorrectly decodes the transmitted signal at the receiver side. The decision rule has the four conditions given above and substantially reduces the error in the transmitted signal at the receiving station.
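Table V's total can be verified by summing P(S)P(N) over every transmitted/noise combination that the decision rule decodes incorrectly (a Python verification sketch, assuming the decision-rule function sketched earlier):

```python
from itertools import product

p_noise = {2: 0.1, 1: 0.2, 0: 0.4, -1: 0.2, -2: 0.1}

def decide(x):
    """The paper's decision rule."""
    if any(e >= 2 for e in x):
        return +1
    if any(e <= -2 for e in x):
        return -1
    return +1 if sum(x) >= 0 else -1

# Sum the probability of every (source, noise) combination that is misdecoded.
ber = 0.0
for s in (+1, -1):
    for nvec in product(p_noise, repeat=3):
        x = tuple(s + n for n in nvec)
        if decide(x) != s:
            p = 0.5  # P(S)
            for n in nvec:
                p *= p_noise[n]
            ber += p

print(round(ber, 5))  # 0.059
```

The sum reproduces the theoretical BER of 0.059 from Table V.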
V. MATLAB SIMULATION

Finding the bit error rate by applying theoretical probability can be verified through simulation in MATLAB. The MATLAB program uses the experimental relative-frequency approach, since the transmission of a large number of signals is simulated directly.

The program starts with the random generation of the signals +1 and -1. For convenience, the signal -1 is represented by the integer 0, so that the transmitted signals, stored in the variable Tx, are randomly generated as 1 (representing +1) and 0 (representing -1). Tx is therefore an array of the symbols 1 and 0.

Initial values of Rx are stored in an array and are later replaced as the for-loop of the program runs. While the signals are being transmitted, random noise is introduced into the system, with the corresponding probabilities handled by the conditional statements. The resulting values of Rx are then stored in their assigned array.

The script of the MATLAB file is listed at the end of this paper. Fig. 3 plots noise vs. probability after the simulation. The BER value produced by the program approximates the theoretical value of BER = 0.059.
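The same relative-frequency experiment can be sketched in Python (an illustrative sketch, not the paper's MATLAB script; note that this sketch simulates the full three-sample vectors analyzed in Sections III-IV, whereas the MATLAB script draws one noise sample per transmitted bit):

```python
import random

random.seed(1)
noise_values = (2, 1, 0, -1, -2)
noise_weights = (0.1, 0.2, 0.4, 0.2, 0.1)

def decide(x):
    """The paper's decision rule."""
    if any(e >= 2 for e in x):
        return +1
    if any(e <= -2 for e in x):
        return -1
    return +1 if sum(x) >= 0 else -1

trials = 200_000
errors = 0
for _ in range(trials):
    s = random.choice((+1, -1))                           # transmitted signal
    n = random.choices(noise_values, noise_weights, k=3)  # three noise samples
    x = tuple(s + e for e in n)                           # received vector
    if decide(x) != s:
        errors += 1

print(errors / trials)  # close to the theoretical BER of 0.059
```

With 200,000 trials, the observed relative frequency of errors agrees with the theoretical BER to within a few tenths of a percent.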
VI. CONCLUSION

The BER value of 0.059 is the least probable error derived under the given conditions, with the transmitted signal sent three times. With the application of the axioms of probability and Bayes' theorem, it is possible to implement a decision rule at the receiver side. The received signals, as sums of the noise and transmitted-signal vectors, can be sorted given the conditions under which the transmitted signals [+1 +1 +1] and [-1 -1 -1] are identified with 100% probability. The application of probability and Bayes' theorem is thus shown to improve the efficiency of a communication system.
VII. ACKNOWLEDGMENT

This project stimulated the mind of the proponent, as it provided exposure to both the theoretical probability approach and the experimental relative-frequency approach in solving for the Bit Error Rate. The proponent therefore acknowledges the instructor of the course Probability and Random Processes, Engr. Warren Nunez, for giving the proponent the opportunity to explore ways of solving this project.

I also acknowledge Miss Lili Andrea, Engr. Mark Anthony Cabilo, and Engr. Jaimerson Valleser for the in-depth sharing and discussions on how to derive the formulas and produce the valuable output that is this project.
VIII. REFERENCES

[1] P. John, Digital Communication, 4th ed.
[2] D. Hanselman and B. Littlefield, Mastering MATLAB 6: A Comprehensive Tutorial and Reference. Upper Saddle River, NJ, USA: Pearson Education Asia Pte Ltd, 2002.
[3] A. Gilat, MATLAB: An Introduction with Applications. OH, USA: John Wiley and Sons, 2004.
[4] R. E. Walpole, R. H. Myers, S. L. Myers, and K. Ye, Probability and Statistics for Engineers and Scientists, 7th ed. Upper Saddle River, NJ, USA: Pearson Education, Inc., 2005.

% This program computes the Bit Error Rate of the digital communication system.
exp_freq = 10000000;

% Transmitter
Tx = randi([0 1],1,exp_freq);    % array of transmitted bits (1 = +1, 0 = -1)
Rx = zeros(1,exp_freq);          % initial values for the received bits
array1 = zeros(1,exp_freq);

% Receiver
for n = 1:exp_freq
    r = rand;   % one uniform draw per bit, reused in every comparison below
    if Tx(n)==0
        if r < 0.1
            Rx(n) = 1;           % N(+2) is introduced. Prob = 0.1
        elseif r <= 0.3
            Rx(n) = 0;           % N(+1) is introduced. Prob = 0.2
        elseif r <= 0.7
            Rx(n) = -1;          % N(0)  is introduced. Prob = 0.4
        elseif r <= 0.9
            Rx(n) = -2;          % N(-1) is introduced. Prob = 0.2
        else
            Rx(n) = -3;          % N(-2) is introduced. Prob = 0.1
        end
    else
        if r < 0.1
            Rx(n) = 3;           % N(+2) is introduced. Prob = 0.1
        elseif r <= 0.3
            Rx(n) = 2;           % N(+1) is introduced. Prob = 0.2
        elseif r <= 0.7
            Rx(n) = 1;           % N(0)  is introduced. Prob = 0.4
        elseif r <= 0.9
            Rx(n) = 0;           % N(-1) is introduced. Prob = 0.2
        else
            Rx(n) = -1;          % N(-2) is introduced. Prob = 0.1
        end
    end

    % Implementing the decision rule:
    if Rx(n) < 0 && Tx(n)==1
        array1(n) = 4;   % 4 as dummy for Rx in {-1,-2,-3} with Tx = +1
    elseif Rx(n) >= 0 && Tx(n)==0
        array1(n) = 5;   % 5 as dummy for Rx in {1,2,3} with Tx = 0 (i.e. -1)
    end
end

% Bit Error Rate calculation
A  = sum(array1==4);   % bits decoded -1 although Tx was +1
Aa = sum(Rx<0);        % all bits decoded -1
BER = A/Aa;            % error probability given that the receiver decides -1
fprintf('Bit Error Rate = %f\n\n', BER)

% Stem plot of noise vs. probability
X = linspace(2,-2,5);
Y = [0.1, 0.2, 0.4, 0.2, 0.1];
stem(X,Y);
xlabel('Noise')
ylabel('Probability')
xlim([-3,3])
ylim([0,0.5])

Figure 3: Stem Plot. Noise vs. Probability
