ABOUT THE AUTHOR
P Ramesh Babu is currently working as Professor in the Department of Electronics and Instrumentation
Engineering in Pondicherry Engineering College, Puducherry. He secured a BTech degree from Sri
Venkateswara University, Tirupati, and an ME degree from Anna University, Chennai. He obtained his PhD
from Indian Institute of Technology (IIT) Madras. He has been in the teaching profession for more than 22
years, and has published several research papers in national as well as international journals. He is a life
member of ISTE.
Dr Ramesh Babu has written 10 books among which Digital Signal Processing, Signals and Systems
and Control Systems have been well received by the student community. His main areas of interest include
multivariate data analysis, digital signal processing and microprocessor-based system design.
Probability Theory
and
Random Processes
P Ramesh Babu
Professor and Head
Department of Electronics and Instrumentation Engineering
Pondicherry Engineering College
Puducherry
Information contained in this work has been obtained by McGraw Hill Education (India), from sources believed to be reliable.
However, neither McGraw Hill Education (India) nor its authors guarantee the accuracy or completeness of any information
published herein, and neither McGraw Hill Education (India) nor its authors shall be responsible for any errors, omissions, or
damages arising out of use of this information. This work is published with the understanding that McGraw Hill Education (India)
and its authors are supplying information but are not attempting to render engineering or other professional services. If such services
are required, the assistance of an appropriate professional should be sought.
Dedicated to my Uncle
Late M Krishna Moorthy
PREFACE
Many signals that we encounter in communication engineering and the data we obtain in computer science
engineering are random in nature. Even if the signals are deterministic, they may be corrupted by unwanted
signals known as noise. To model and analyze such random signals and their effect on a system’s performance,
engineers must have knowledge of probability theory and random processes. This book is designed for
undergraduate students and aims to introduce the basic concepts of probability theory that are required to
understand the probability models used in communication and computer science engineering. Several books
on probability and random processes cover the topics in depth but place less emphasis on problem solving.
This textbook therefore gives particular emphasis to developing the problem-solving skills of students by
providing a large number of solved problems.
Salient Features
• Important topics covered in detail: Bernoulli and Poisson Distributions; Chebyshev and Markov
Inequalities; Central Limit Theorem; Linear Regression; Stationary and Ergodic Random Processes;
Linear System Theory and Sources of Noise
• Mathematical models explained following a step-by-step approach
• Application-based problems discussed aplenty
• Lucid discussions
• Rich pedagogy:
  - Diagrams: 216
  - Solved Examples: 809
  - Practice Problems: 247
  - Exercise Problems: 255
  - Review Questions: 295
  - Multiple-Choice Questions: 211
Chapter Organization
A more detailed description of the chapters is as follows.
Chapter 1 deals with basic concepts in probability, set theory, basic combinatorial analysis, conditional
probability, independent events and Bernoulli trials.
Chapter 2 discusses different types of random variables, cumulative distribution functions, probability
mass functions of discrete random variables, and probability density functions of continuous random
variables. It also introduces different types of discrete and continuous distributions and their applications.
These include the Bernoulli distribution, binomial distribution, Poisson distribution, geometric distribution,
Acknowledgements
Firstly, I would like to thank Prof. D Govindarajulu, Principal—Pondicherry Engineering College for
his encouragement in writing this book. I am indebted to Dr P Dananjayan, Professor of Electronics and
Communication Engineering for his valuable suggestions and guidance. I am grateful to Dr R Anandanatarajan,
Professor of Electronics and Instrumentation Engineering, for his constant encouragement and support.
Next, I would like to thank all those reviewers who took out time to review the manuscript and gave
important suggestions. Their names are given below.
Uma Rathore Bhatt Institute of Engineering and Technology, Devi Ahilya University,
Indore, Madhya Pradesh
P RAMESH BABU
Publisher’s Note
McGraw Hill Education (India) invites suggestions and comments from you, all of which can be sent to
[email protected] (kindly mention the title and author name in the subject line).
Piracy-related issues may also be reported.
1
BASIC CONCEPTS IN PROBABILITY
1.1 INTRODUCTION
The word probability literally means chance, a very commonly used word in day-to-day conversation. On
observing a cloudy sky, we say that there is a chance of rain. At the end of the day, it may rain or it may not.
In a cricket match between two teams, we often say that the chance of winning for one team is more than that
of the other. Other terms very close to chance are probably, likely, etc. These terms are used when there is
uncertainty in the outcome of an event. Probability is a mathematical measure of uncertainty, likelihood or
chance. A means of evaluating the uncertainty, likelihood and chance of the outcomes resulting from a
statistical experiment is called the theory of probability.
The origin of the theory of probability can be traced back to the seventeenth century. In 1654, two famous
mathematicians, Blaise Pascal and Pierre de Fermat, first formulated the principles of probability theory.
Later, the Dutch scientist Christiaan Huygens published the first book on probability, which led to rapid
development of the subject in the 18th century. The scientists Pierre-Simon de Laplace, Chebyshev, Markov,
von Mises, and Kolmogorov stimulated the development of probability theory through the variety of its
applications in different fields like genetics, economics, etc.
Solved Problems
Solution
(a) A coin is tossed three times. Therefore, the sample space is
{HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
(b) If the coin shows a tail, the experiment is not conducted. So the sample space is
{T, H1, H21, H22, H23, H24, H25, H26, H3, H41, H42, H43, H44, H45, H46, H5, H61, H62, H63, H64,
H65, H66}
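Sample spaces like these can be enumerated mechanically. The following is a minimal Python sketch (illustrative only, not from the text; the variable names are ours) that lists the 2³ outcomes of part (a):

    from itertools import product

    # Part (a): a coin tossed three times -- all 2^3 outcome strings
    space_a = ["".join(t) for t in product("HT", repeat=3)]
    print(space_a)       # ['HHH', 'HHT', 'HTH', ..., 'TTT']
    print(len(space_a))  # 8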
1.2 From a group of 3 boys and 2 girls, two children are selected at random. Describe the events.
(a) A: Both selected children are boys.
(b) B: One boy and one girl.
(c) C: At least one girl is selected.
Which pairs of events are mutually exclusive?
Solution Let us denote the boys as B1, B2 and B3 and girls as G1 and G2. The sample space S is
S = {B1B2, B1G1, B1G2, B2G1, B2G2, B3G1, B3G2, B1B3, B2B3, G1G2}
(a) A = {B1B2, B1B3, B2B3}
(b) B = {B1G1, B1G2, B2G1, B2G2, B3G1, B3G2}
(c) C = {G1G2, B1G1, B1G2, B2G1, B2G2, B3G1, B3G2}
From the events, we can say that the events A and B, A and C are mutually exclusive.
1.3 A box contains 1 black and 3 red balls. Two balls are drawn at random in succession without
replacement. What is the sample space?
Solution The box contains 1 black (B) and 3 red (R) balls.
The sample space S is
S = {BR, RB, RR}
Practice Problems
1.1 A coin is tossed twice. If the second throw is heads, a dice is thrown. Write the sample space.
Ans. {TT, HT, TH1, TH2, TH3, TH4, TH5, TH6, HH1, HH2, HH3, HH4, HH5, HH6}
1.2 Two dice are thrown. Describe the following events:
A = The sum is a multiple of 4.
B = The sum is greater than 10.
C = The sum is a prime number.
Ans. A = {(3, 1), (2, 2), (6, 2), (1, 3), (5, 3), (4, 4), (3, 5), (2, 6), (6, 6)}
B = {(6, 5), (5, 6), (6, 6)}
C = {(1, 1), (2, 1), (4, 1), (6, 1), (1, 2), (3, 2), (5, 2), (2, 3),
(4, 3), (1, 4), (3, 4), (2, 5), (6, 5), (1, 6), (5, 6)}
Solved Problems
1.4 A card is drawn from a pack of 50 cards numbered 1 to 50. Find the probability of drawing a number
which is a square.
Solution Given n(S) = 50
Let A be the event of drawing a square number. Then
A = {1, 4, 9, 16, 25, 36, 49}
n(A) = 7
Required probability P(A) = n(A)/n(S) = 7/50
1.5 A card is drawn from a well-shuffled deck of 52 cards. Find the probability of getting (a) a jack,
(b) a red card, (c) a diamond, and (d) a six of hearts.
Solution
(a) The total number of cards n(S) = 52
Number of jacks n(J) = 4
Required probability = n(J)/n(S) = 4/52 = 1/13
(b) Number of red cards n(R) = 26
P(R) = n(R)/n(S) = 26/52 = 1/2
(c) Number of diamond cards n(D) = 13
P(D) = n(D)/n(S) = 13/52 = 1/4
(d) Number of cards that are the six of hearts = 1
Probability = 1/52
1.6 In a single throw of three dice, find the probability of getting a sum of 7.
Solution In throwing three dice, the total number of possible outcomes is n(S) = 6³ = 216
Let A be the event of getting a sum of 7. Then
A = {(1, 1, 5), (1, 2, 4), (1, 3, 3), (1, 4, 2), (1, 5, 1), (2, 1, 4), (2, 2, 3), (2, 3, 2), (2, 4, 1),
(3, 1, 3), (3, 2, 2), (3, 3, 1), (4, 1, 2), (4, 2, 1), (5, 1, 1)}
n(A) = 15
P(A) = n(A)/n(S) = 15/6³ = 5/72
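The favourable outcomes can also be counted by brute force. A minimal Python sketch (illustrative, standard library only):

    from itertools import product
    from fractions import Fraction

    outcomes = list(product(range(1, 7), repeat=3))   # all 6^3 ordered rolls
    favourable = [o for o in outcomes if sum(o) == 7]
    print(len(favourable))                            # 15
    print(Fraction(len(favourable), len(outcomes)))   # 5/72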
1.7 A coin is tossed four times. Find the chance that heads and tails show alternately.
Solution
Sample space S = {HHHH, HHHT, HHTH, HHTT, HTHH, HTHT, HTTH, HTTT, THHH, THHT,
THTH, THTT, TTHH, TTHT, TTTH, TTTT}
n(S) = 16
Number of favourable cases (HTHT and THTH) = 2
Probability = 2/16 = 1/8
Practice Problem
1.3 Three coins are tossed once. Find the probability of getting
(a) exactly 2 tails, (b) 3 tails, (c) at least 2 tails, (d) exactly one head, and (e) no heads.
Ans. (a) 3/8 (b) 1/8 (c) 1/2 (d) 3/8 (e) 1/8
Definitions
Set A set is a collection of objects. Sets are usually designated by capital letters and specified by the contents of two braces: { }. Examples are A = {1, 2, 3} and B = {2, 3, 5, 6}.
The first example describes the set consisting of the positive integers 1, 2 and 3, and the second set B consists of the positive integers 2, 3, 5 and 6.
Another way of representing a set is by a statement or rule. For example, we might say that A consists of all real numbers between 0 and 1 inclusive. This can simply be written as
A = {x | 0 ≤ x ≤ 1}
Element The individual objects in a set are known as elements, denoted by lowercase letters. When a is a member of A, we write a ∈ A, denoting "a belongs to A".
Empty Set A set with no elements is known as an empty set, denoted by ∅.
Universal Set A set containing all the elements for the problem under consideration is known as the universal set.
Subset If every element of the set A is also an element of the set B, then A is said to be a subset of B, denoted as A ⊂ B.
Set A is said to be equal to set B if and only if A ⊂ B and B ⊂ A.
Properties of Empty Sets and Universal Sets
• For every set A, we have ∅ ⊂ A.
• If U is a universal set, then for every set A considered in the context of U, we have A ⊂ U.
• A set is said to be countable if its elements can be put in one-to-one correspondence with the natural
numbers. If a set is not countable, it is called uncountable.
• Two sets are said to be disjoint or mutually exclusive if they do not have any element in common.
• A set is finite if it has finitely many elements, e.g. A = {1, 2}.
• A set is infinite if it has infinitely many elements. The set of all natural numbers is an infinite set.
• Two sets A and B are said to be equivalent if there is a one-to-one correspondence between A and B.
Commutative Law
(a) A ∪ B = B ∪ A (1.17a)
(b) A ∩ B = B ∩ A (1.17b)
Distributive Law
(a) A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C) (1.18a)
(b) A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C) (1.18b)
Associative Law
(a) A ∪ (B ∪ C) = (A ∪ B) ∪ C (1.19a)
(b) A ∩ (B ∩ C) = (A ∩ B) ∩ C (1.19b)
Also, we have
(a) A ∩ ∅ = ∅ (1.20a)
(b) A ∪ ∅ = A (1.20b)
De Morgan's Laws (here A′ denotes the complement of A)
(a) (A ∩ B)′ = A′ ∪ B′ (1.21a)
(b) (A ∪ B)′ = A′ ∩ B′ (1.21b)
(c) (A′)′ = A (1.21c)
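These identities are easy to check numerically. A minimal Python sketch using built-in sets (the universal set U here is an arbitrary illustration, not from the text):

    # Verify De Morgan's laws (1.21a)-(1.21b) on a small universal set U
    U = {1, 2, 3, 4, 5, 6}
    A = {1, 2, 3}
    B = {2, 3, 5, 6}

    complement = lambda S: U - S
    print(complement(A & B) == complement(A) | complement(B))  # True (1.21a)
    print(complement(A | B) == complement(A) & complement(B))  # True (1.21b)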
Solved Problems
1.8 State whether the following sets are countable or uncountable, or finite or infinite.
A = {2, 3}, B = {x | 0 < x ≤ 10}, C = {positive integers},
D = {number of students in a college},
E = {x | −30 ≤ x ≤ −1}
Solution
A and D are countable and finite.
C is countable and infinite.
B and E are uncountable and infinite.
1.9 Define a set A that contains all integers with magnitude not exceeding 5. Define a second set B having
even integers larger than −3 and not larger than 4. Determine whether A ⊂ B and B ⊂ A.
Solution
A = {−5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5}
B = {−2, 0, 2, 4}
From the above, B ⊂ A, but A ⊄ B.
(b) (A ∩ B ∩ C)′ = A′ ∪ B′ ∪ C′ (1.22b)
Solution
(a) (A ∩ (B ∪ C))′ = ((A ∩ B) ∪ (A ∩ C))′
= (A ∩ B)′ ∩ (A ∩ C)′
= (A′ ∪ B′) ∩ (A′ ∪ C′)
(b) (A1 ∪ A2 ∪ … ∪ AN)′ = A1′ ∩ A2′ ∩ … ∩ AN′ (1.23b)
Solution
(a) Let A1 ∩ A2 = B
Then
(A1 ∩ A2 ∩ … ∩ AN)′ = (B ∩ A3 ∩ … ∩ AN)′
= B′ ∪ (A3 ∩ A4 ∩ … ∩ AN)′
= (A1′ ∪ A2′) ∪ (A3 ∩ … ∩ AN)′
Similarly, continuing this process, (A1 ∩ A2 ∩ … ∩ AN)′ = A1′ ∪ A2′ ∪ A3′ ∪ A4′ ∪ … ∪ AN′
(b) Let A1 ∪ A2 = B
(A1 ∪ A2 ∪ A3 ∪ … ∪ AN)′ = (B ∪ A3 ∪ … ∪ AN)′
= B′ ∩ (A3 ∪ … ∪ AN)′
Since (A1 ∪ A2)′ = A1′ ∩ A2′,
⇒ (A1 ∪ A2 ∪ A3 ∪ … ∪ AN)′ = A1′ ∩ A2′ ∩ (A3 ∪ … ∪ AN)′
= (A1′ ∩ A2′) ∩ (A3 ∪ A4 ∪ … ∪ AN)′
= A1′ ∩ A2′ ∩ A3′ ∩ A4′ ∩ (A5 ∪ … ∪ AN)′
If we continue this process,
(A1 ∪ A2 ∪ A3 ∪ … ∪ AN)′ = A1′ ∩ A2′ ∩ A3′ ∩ … ∩ AN′
Practice Problems
1.4 Two sets are given by
A = {−3, −2, −1, 0, 1, 2, 3}
B = {−6, −4, −2, 0, 2, 4, 6}
Find (a) A − B, (b) B − A, (c) A ∪ B, (d) A ∩ B.
Ans. (a) {−3, −1, 1, 3} (b) {−6, −4, 4, 6}
(c) {−6, −4, −3, −2, −1, 0, 1, 2, 3, 4, 6} (d) {−2, 0, 2}
1.5 Using Venn diagrams, prove that
(a) (A ∪ B)′ = A′ ∩ B′
(b) (A ∩ B)′ = A′ ∪ B′
1.6 Using a Venn diagram, show that the following identities are true:
(a) (A ∩ B ∩ C)′ = A′ ∪ B′ ∪ C′
(b) (A ∪ B)′ ∩ C = C − [(A ∩ C) ∪ (B ∩ C)]
1.7 Shade Venn diagrams for the following sets:
(a) (A ∩ B) ∪ C
(b) (A ∩ B)′ ∪ C
(c) (A ∪ B) ∪ (C ∩ D)
Solved Problems
1.14 The sample space of an experiment has 10 equally likely outcomes, given by S = {a1, a2, …, a10}.
Three events are defined as
A = {a1, a2, a3, a5}  B = {a2, a4, a6}
C = {a6, a9}
Find the probability of (a) A ∪ B, (b) B ∪ C′, and (c) A ∩ (B ∪ C).
Solution Given, the probability of any outcome = 1/10
(a) P{A ∪ B} = P{a1, a2, a3, a4, a5, a6} = 6/10 = 3/5
(b) B ∪ C′ = {a2, a4, a6} ∪ {a1, a2, a3, a4, a5, a7, a8, a10}
= {a1, a2, a3, a4, a5, a6, a7, a8, a10}
P(B ∪ C′) = 9/10
(c) A ∩ (B ∪ C) = {a1, a2, a3, a5} ∩ {a2, a4, a6, a9}
= {a2}
P(A ∩ (B ∪ C)) = 1/10
Solution Using Venn diagrams (Figs 1.9 and 1.10, not reproduced here), A ∪ B can be decomposed into the
disjoint pieces A ∩ B′, A ∩ B and A′ ∩ B, so that
P(A ∩ B′) = P(A) − P(A ∩ B) and P(A′ ∩ B) = P(B) − P(A ∩ B)
⇒ P(A ∩ B′) + P(A′ ∩ B) = P(A) + P(B) − 2P(A ∩ B) (1.29)
From Eq. (1.28) we can write
P(A ∩ B′) + P(A′ ∩ B) = P(A ∪ B) − P(A ∩ B) (1.30)
Substituting Eq. (1.30) in Eq. (1.29), we get
P(A ∪ B) = P(A) + P(B) − P(A ∩ B) (1.31)
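Equation (1.31) can be verified on any small example. A minimal Python sketch with one die, where A = "odd number" and B = "number greater than 3" (an illustrative choice, not from the text):

    from fractions import Fraction

    S = set(range(1, 7))
    A = {1, 3, 5}
    B = {4, 5, 6}

    P = lambda E: Fraction(len(E), len(S))       # classical probability
    assert P(A | B) == P(A) + P(B) - P(A & B)    # both sides equal 5/6
    print(P(A | B))                              # 5/6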
Solution We have
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
If A and B are mutually exclusive events, A ∩ B = ∅ and
P(A ∩ B) = 0
Therefore, P(A ∪ B) = P(A) + P(B)
Otherwise, P(A ∪ B) < P(A) + P(B)
So we can say P(A ∪ B) ≤ P(A) + P(B)
⇒ P(A′ ∩ B) = P(B) − P(A ∩ B)
Similarly, we can write
A = (A ∩ B′) ∪ (A ∩ B)
P(A) = P[(A ∩ B′) ∪ (A ∩ B)] = P(A ∩ B′) + P(A ∩ B)
P(A ∩ B′) = P(A) − P(A ∩ B)
If A ⊂ B, then A ∩ B = A, so that
P(A′ ∩ B) = P(B) − P(A)
Since P(A′ ∩ B) ≥ 0, P(A) ≤ P(B)
Practice Problem
Solved Problem
Solution
P(A′ ∩ B′) = P((A ∪ B)′) = 1 − P(A ∪ B)
= 1 − [P(A) + P(B) − P(A ∩ B)]
= 1 − P(A) − P(B) + P(A ∩ B)
Practice Problems
Solved Problems
1.21 How many different 6-place license plates are possible if the first three places are English alphabets
and the final three are numbers?
Solution From the basic principle of counting, the number of license plates is 26 × 26 × 26 × 10 × 10 ×
10 = 17576000.
1.22 Repeat the above problem under the assumption that no letter or number can be repeated in a single
license plate.
Solution With no repetition allowed, the number of plates is 26 × 25 × 24 × 10 × 9 × 8 = 11232000.
1.23 A subcommittee is formed by selecting one person from each of three groups containing 2, 3 and 4
persons respectively. How many different subcommittees are possible?
Solution Since a single person is to be selected from each group, the number of subcommittees is
2 × 3 × 4 = 24
1.9.2 Permutations
Consider the letters a, b and c. These letters can be arranged in 6 different ways as shown below:
abc, acb, bac, bca, cab, cba
Each arrangement is known as a permutation. The first letter can be chosen from any of the three letters. Once we
choose the first letter, the second letter can be chosen from either of the two remaining letters, and the third
letter is the one remaining.
Therefore, the number of permutations is 3 × 2 × 1 = 6. This can be extended to n objects. The number of
different permutations of n objects is
n(n − 1)(n − 2) … 3 · 2 · 1 = n! (1.35)
For example, in a cricket team with 11 players, the number of different batting orders is 11! = 39916800.
Note: (i) For n objects of which n1 are alike, n2 are alike, …, nr are alike, the number of permutations is
n!/(n1! n2! … nr!)
(ii) The number of permutations of n objects taken r at a time is nPr = n!/(n − r)! (1.36)
1.9.3 Combinations
For three letters a, b and c, the number of permutations is six. For permutations, the order in which the
letters are taken is important; therefore, abc and acb are counted as two different permutations. In a
combination, however, the order in which the letters are taken is not considered, so the number of combinations
of the three letters a, b and c is only one. The number of two-letter combinations that can be formed from a, b and c
is three: ab, bc, ca. The number of combinations of n things taken r at a time is denoted by nCr or C(n, r):
nCr = C(n, r) = n!/((n − r)! r!) (1.37)
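Both formulas (1.36) and (1.37) are available directly in Python's standard library; a minimal sketch (illustrative only) reproducing counts quoted above:

    import math

    print(math.perm(3, 3))    # 6         permutations of a, b, c
    print(math.perm(11, 11))  # 39916800  batting orders of 11 players
    print(math.comb(3, 2))    # 3         two-letter combinations: ab, bc, ca
    print(math.comb(8, 5))    # 56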
• The number of ways of dividing (p + q) items into two groups of p and q items respectively is
(p + q)!/(p! q!) (1.39)
• The number of ways of dividing 2p items into two equal groups of p each is (2p)!/(p!)² when the two
groups have distinct identity.
• The number of ways of dividing 2p items into two equal groups of p each is (2p)!/(2! (p!)²) when the
groups do not have distinct identity.
• The number of ways in which (p + q + r) things can be divided into three groups containing p, q and
r things respectively is
(p + q + r)!/(p! q! r!) (1.40)
If p = q = r and the three groups do not have distinct identity, the number of ways is (3p)!/(3! (p!)³).
• The number of circular arrangements of n distinct items is (n − 1)! if there is a difference between
clockwise and anticlockwise arrangements, and (n − 1)!/2 if there is no difference between clockwise
and anticlockwise arrangements.
• For x1 + x2 + … + xn = s, where s ≥ 0, the number of positive integer solutions (when s ≥ n) is
C(s − 1, n − 1), and the number of non-negative integer solutions is C(n + s − 1, n − 1).
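The two integer-solution counts can be wrapped in small helpers; a minimal Python sketch (the function names below are ours, not standard library functions):

    import math

    def positive_solutions(n, s):
        # positive integer solutions of x1 + ... + xn = s  (s >= n)
        return math.comb(s - 1, n - 1)

    def nonneg_solutions(n, s):
        # non-negative integer solutions of x1 + ... + xn = s
        return math.comb(n + s - 1, n - 1)

    print(positive_solutions(3, 7))  # 15 (matches the sum-of-7 count in Problem 1.6)
    print(nonneg_solutions(3, 7))    # 36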
Solved Problems
1.24 Six boys and six girls sit in a row randomly. Find the probability that
(a) the six girls sit together
(b) the boys and girls alternate.
Solution Six boys and six girls can sit in a row in 12! ways. If the six girls sit together, we treat them as
a single person, and the resulting 7 persons can be arranged in 7! ways. Among themselves, the six girls can
sit in 6! ways.
The probability that the six girls sit together = (7! 6!)/12!
The boys and girls can sit alternately in 2 × 6! × 6! ways.
Therefore, probability = (2 × 6! × 6!)/12!
1.25 If 10 people are to be divided into three committees of sizes 3, 3 and 4, how many divisions are
possible?
Hint: The number of possible divisions of n distinct objects into r distinct groups of sizes n1, n2, …, nr is
n!/(n1! n2! … nr!)
1.26 Kumar has 10 friends, of whom he will invite 6 for his birthday party. (a) How many choices has
he if 2 of the friends will not attend together? (b) How many choices has he if 2 of his friends will only
attend together.
Solution
(a) Total number of choices = number of choices when neither is invited
+ number of choices when exactly one of them is invited
Number of choices when neither is invited = C(8, 6)
Number of choices when exactly one of them is invited = 2C(8, 5)
⇒ Total number of choices = C(8, 6) + 2C(8, 5) = 28 + 112 = 140
1.27 In how many ways can Surya who has 10 shirts, 6 pants and 2 shoes be dressed?
Solution Using the multiplication rule, the number of ways is C(10, 1) C(6, 1) C(2, 1) = 10 × 6 × 2 = 120
1.29 In a university examination, a student is to answer 5 out of 8 questions. How many choices does he
have? How many does he have if he must answer at least 2 of the first four questions?
Solution (a) From 8 questions, he has to select 5; therefore, he has C(8, 5) choices.
(b) There are three possibilities for a student to satisfy the requirement.
(i) He can select all of the first four questions and one question from the remaining four.
(ii) He can select 3 questions from the first four and 2 questions from the remaining four.
(iii) He can select 2 questions from the first four and 3 questions from the remaining four.
1.30 From 8 consonants and 6 vowels, how many words can be formed consisting of 5 different
consonants and 4 different vowels?
Solution From 8 consonants, 5 different consonants can be selected in C(8, 5) ways. Similarly, 4 different
vowels can be selected from 6 vowels in C(6, 4) ways. The resulting 9 different letters can be arranged among
themselves in 9! ways. Therefore, the number of words
= C(8, 5) C(6, 4) 9! = 304819200
1.31 How many different letter arrangements can be made from the letters (a) MALAYALAM, and
(b) MISSISSIPPI?
Solution
(a) The word MALAYALAM has a total of 9 letters, consisting of 2 Ms, 4 As, 2 Ls and 1 Y. Therefore,
Number of different letter arrangements = 9!/(2! 4! 2!) = 3780
(b) The word MISSISSIPPI has a total of 11 letters, of which 4 are S, 4 are I and 2 are P. The number of
different letter arrangements is equal to 11!/(4! 4! 2!) = 34650.
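The same multinomial count works for any word. A minimal Python sketch (the helper name arrangements is ours):

    from math import factorial
    from collections import Counter

    def arrangements(word):
        # n! divided by k! for each repeated letter (multinomial coefficient)
        result = factorial(len(word))
        for k in Counter(word).values():
            result //= factorial(k)
        return result

    print(arrangements("MALAYALAM"))    # 3780
    print(arrangements("MISSISSIPPI"))  # 34650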
1.32 In a university examination, a student has to answer 8 out of 10 questions. How many ways can he
answer? If he has to answer at least 3 out of the first five questions then how many choices does he have?
Solution Since the student has to answer 8 out of 10 questions, there are C(10, 8) ways.
C(10, 8) = 10!/(8! 2!) = 45
For the second case, he has to answer at least three questions out of the first five. There are three possible
ways.
(a) He can answer all five questions from the first five and select three from the remaining five.
So there are C(5, 5) C(5, 3) possible ways.
(b) He can answer 4 questions from the first five and select four from the remaining five.
Hence, there are C(5, 4) C(5, 4) possible ways.
(c) He can answer 3 questions from the first five and select all five remaining questions. So there are
C(5, 3) C(5, 5) possible ways.
Hence, the total number of choices is
C(5, 5) C(5, 3) + C(5, 4) C(5, 4) + C(5, 3) C(5, 5) = 10 + 25 + 10 = 45
1.33 Ramesh has 10 friends, and he decided to invite 6 of them to his birthday party. How many choices
does he have if two of the friends will not attend together? How many choices has he if two of his friends
will only attend together?
Solution Since two of the friends will not attend together, there are C(8, 6) ways of inviting by leaving out
both of them and 2C(8, 5) ways of inviting with exactly one of them. Therefore, the total number of
choices = C(8, 6) + 2C(8, 5) = 140
In the second case, two of his friends will only attend together. There are C(8, 6) ways of inviting by
leaving out the two, and C(8, 4) ways of inviting both of them together with four others. Therefore, the total
number of choices = C(8, 6) + C(8, 4) = 98
1.34 For an association, a president, vice president and secretary, all different, are to be chosen from 10
contestants. How many different choices are possible if
(a) there are no restrictions?
(b) Ramu and Ravi will not associate together?
(c) Surya and Harsha will serve together or not at all?
(d) Guptha will serve only if he is president?
Solution
(a) Since there are no restrictions, the number of choices is 10 × 9 × 8 = 720.
(b) Without Ramu and Ravi, the number of choices is 8 × 7 × 6, and with exactly one of them in the
association the number of choices is 3 × 2 × 8 × 7. So the total number of choices is
8 × 7 × 6 + (3 × 2 × 8 × 7) = 672
(c) For this case, the association can be formed with both of them or with neither.
When Harsha and Surya serve together, they can fill two of the three posts in 3 × 2 = 6 ways and the
third post can be filled from the remaining 8 contestants, so the number of choices is 6 × 8 = 48.
When neither is in the association, the number of ways to form the association is 8 × 7 × 6. Therefore,
the total number of ways is
8 × 7 × 6 + 6 × 8 = 384
(d) If Guptha is chosen as president, the number of ways to select the remaining posts is 9 × 8 = 72.
If he is not chosen at all, the number of ways to fill the posts is 9 × 8 × 7.
Therefore, the total number of ways is
9 × 8 + 9 × 8 × 7 = 576
1.35 How many different 7-place licence plates are possible if the first 2 places are for letters and the
other 5 for numbers. How many licence plates would be possible if repetition among letters or numbers
were prohibited?
Solution For each letter, we have 26 choices and for each number, we have 10 choices. Therefore, there
would be
26 × 26 × 10 × 10 × 10 × 10 × 10 = (26)²(10)⁵ = 67600000 different plates possible.
If repetition is not allowed, then for the first-place letter we have 26 choices and for the second-place
letter 25 choices. Similarly, starting from the first-place number, the number of choices decreases by one for
each successive place. Hence, there would be 26 × 25 × 10 × 9 × 8 × 7 × 6 different plates possible.
1.36
(a) In how many ways can 4 boys and 4 girls sit in a row?
(b) If the 4 boys and 4 girls sit in a row in such a way that the boys sit together and the girls sit together,
then how many ways are possible?
(c) In the above case, if the boys alone must sit together, then how many ways are possible?
(d) In how many ways can they sit if no two boys and no two girls are allowed to sit together?
Solution
(a) Since there is no restriction, there would be 8! ways.
(b) Since the 4 boys and the 4 girls each sit together, there are two possible orderings: the boys can sit
first and then the girls, or the girls first and then the boys.
Among the boys, the number of possible arrangements is 4! and similarly, among the girls, the
number of arrangements is 4!.
Therefore, the total number of ways is 2(4! × 4!)
(c) Treating the four boys as a single block, the block can occupy five positions relative to the four girls:
BGGGG
GBGGG
GGBGG
GGGBG
GGGGB
Within the block there are 4! arrangements of the boys, and there are 4! arrangements of the girls.
Therefore, the total number of ways is 5(4! × 4!)
(d) The boys and girls can sit like this when two of the same sex are not allowed to sit together:
BGBGBGBG
GBGBGBGB
Among the boys there are 4! ways and among the girls 4! ways. Therefore, the total number of ways
is 2(4! × 4!)
1.37 In how many ways can 3 fiction novels, 2 physics books and 1 mathematics book be arranged?
(a) The books can be arranged in any order.
(b) The fiction novels must be together but other books can be arranged in any order.
Solution
(a) There are 6! ways the books can be arranged.
(b) The fiction novels must be together, so as a group they can occupy 4 positions:
F — — —
— F — —
— — F —
— — — F
Among the fiction novels, the number of arrangements is 3! = 6, and among the remaining three
books the number of arrangements is 3! = 6.
Therefore, the total number of ways is 4(6 × 6) = 144.
1.38 In how many ways can 10 people be seated in a row if
(a) there are no restrictions?
(b) two particular persons X and Y must sit next to each other?
(c) there are 5 men and 5 women, and the men and women must sit alternately?
(d) there are 6 men and 4 women, and the 6 men must sit next to each other?
(e) there are 5 married couples and each couple must sit together?
Solution
(a) Since there are no restrictions, the number of ways is 10! = 3628800
(b) If X and Y sit in the order XY, the pair can start in any of (n − 1) = 9 positions; similarly for the
order YX. So the total number of choices for the pair is 2(n − 1). For each such choice, the remaining
eight people can sit in 8! ways.
Hence, the total number of ways is 2(n − 1)8! = 2(9)8! = 18 × 8! = 725760
(c) If men and women sit alternately, they can sit like this:
M W M W ...
W M W M ...
Among the men, the number of arrangements is 5! and among the women, the number of
arrangements is 5!. Therefore, the total number of ways is 2 × 5! × 5! = 28,800
(d) The block of 6 men can occupy the seats in the following ways:
6M W W W W
W 6M W W W
W W 6M W W
W W W 6M W
W W W W 6M
Among the 6 men, the number of arrangements is 6! and among the 4 women, 4!. Therefore, the
total number of ways is 5 × (6!) × (4!) = 86,400
(e) The 5 married couples can be arranged as 5 blocks in 5! ways, and within the couples in 2⁵ ways.
Therefore, the number of ways is 5!(2⁵)
1.39 A card is drawn from a well-shuffled 52-card deck. Find the probability that it is a king.
Solution In a 52-card deck, the number of kings is 4. Therefore,
P[king] = 4/52 = 1/13
1.40 In a box, there are 250 coloured balls: 50 black, 40 green, 70 red, 60 white and 30 blue. What are
the probabilities of selecting a ball of each colour?
1.41 An experiment consists of rolling a single dice. Two events are defined as
A = {a 6 shows up} and
B = {a 2 or a 5 shows up}
(a) Find P(A) and P(B). (b) Define a third event C so that P(C) = 1 − P(A) − P(B).
1.42 A dice is tossed. Find the probabilities of the event A = {odd number shows up}, B = {number
larger than 3 shows up}.
1.43 When two dice are thrown, determine the probabilities from Axiom 3 for the following events:
(a) A = {sum = 7}, (b) B = {8 < sum ≤ 11}, (c) C = {10 < sum}, and determine (d) P[B ∩ C] and
(e) P[B ∪ C].
Solution When two dice are thrown, the sample space S contains 6² = 36 points, as in Solved Problem
1.42. That is, n(S) = 36.
(a) For Event A, the number of outcomes = 6
Therefore,
P(Sum = 7) = 6/36 = 1/6
(b) B = {8 < Sum ≤ 11}
The possible outcomes for Event B are (3, 6), (4, 5), (5, 4), (6, 3), (4, 6), (5, 5), (6, 4), (6, 5), (5, 6)
That is, nB = 9
P(B) = nB/n(S) = 9/36 = 1/4
(c) For Event C, the possible outcomes are (5, 6), (6, 5), (6, 6). That is, nC = 3
P(C) = P(10 < sum) = 3/36 = 1/12
(d) For Event B ∩ C, that is {8 < sum ≤ 11 and 10 < sum}, the possible outcomes are (6, 5) and (5, 6).
n(B ∩ C) = 2
P(B ∩ C) = 2/36 = 1/18
(e) For Event B ∪ C, the number of possible outcomes is 10.
They are (3, 6), (4, 5), (4, 6), (5, 4), (5, 5), (5, 6), (6, 3), (6, 4), (6, 5) and (6, 6)
P(B ∪ C) = 10/36 = 5/18
1.44 If two dice are rolled, what is the probability that the sum of the upturned faces will equal 9?
1.45 An urn contains 5 white and 4 black balls. If two balls are drawn at random, what is the probability
that one is white and the other black?
Solution The total number of balls is equal to 9. Two balls are drawn; therefore, there would be C(9, 2) = 36
points in the sample space. Since all outcomes are equally likely, the number of ways a white ball can be drawn
from 5 white balls is 5. Similarly, the number of ways a black ball can be drawn from 4 black balls is 4. Hence,
the desired probability is
(5 × 4)/36 = 20/36 = 5/9
1.46 A committee of 6 is to be selected from a group of 6 men and 8 women. If the selection is made
randomly, find the probability that the committee consists of 3 men and 3 women.
Solution The committee is to be selected from a group of 14 people. A committee of 6 can be selected from
14 persons in C(14, 6) possible ways. Since each of the combinations is equally likely, 3 men can be selected
from 6 men in C(6, 3) combinations, and 3 women can be selected from 8 women in C(8, 3) combinations. Now
the probability is equal to
C(6, 3) C(8, 3)/C(14, 6) = 0.3729
1.47 A bag contains 5 white, 7 red and 3 black balls. If three balls are drawn at random, what is the
probability that none of them is red?
Solution 3 balls can be selected from 15 balls in C(15, 3) ways. Three balls with no red among them can be
drawn as below:
(a) All white balls in C(5, 3) ways, that is, 10 ways.
(b) Two white and one black ball in C(5, 2) C(3, 1) = 30 ways.
(c) One white and two black balls in C(5, 1) C(3, 2) = 15 ways.
(d) All black balls in C(3, 3) = 1 way.
So the required probability
= (10 + 30 + 15 + 1)/C(15, 3) = 56/((15 × 14 × 13)/3!) = 56/455 = 8/65
1.48 A box contains 4 point-contact diodes and 6 alloy junction diodes. What is the probability that 3
diodes picked at random contain at least two point contact diodes?
Solution Let p be the event of selecting a point-contact diode and A the event of selecting an alloy junction
diode. At least two point-contact diodes among the three picked can occur as
(a) two point-contact diodes and one alloy junction diode, or
(b) all three point-contact diodes.
P(at least two point-contact diodes) = P(p = 2, A = 1) + P(p = 3, A = 0)
Three diodes can be selected from 10 diodes in C(10, 3) ways.
P = C(4, 2) C(6, 1)/C(10, 3) + C(4, 3) C(6, 0)/C(10, 3)
= 36/120 + 4/120 = 1/3
1.49 From a group of 3 Indians, 4 Pakistanis and 5 Americans, a subcommittee of 4 people is selected by
lots. Find the probability that the subcommittee will consist of
(a) 2 Indians and 2 Pakistanis
(b) 1 Indian, 1 Pakistani and 2 Americans
(c) 4 Americans
Solution The group contains 12 people, and a subcommittee of 4 people can be selected in C(12, 4) ways.
(a) Two Indians and two Pakistanis can be selected in C(3, 2) and C(4, 2) ways respectively. Therefore,
the probability is equal to
C(3, 2) C(4, 2)/C(12, 4) = 18/495
(b) 1 Indian, 1 Pakistani and 2 Americans can be selected in C(3, 1) C(4, 1) C(5, 2) ways. Therefore, the
probability
= C(3, 1) C(4, 1) C(5, 2)/C(12, 4) = 120/495
(c) Four Americans can be selected from a group of 5 in C(5, 4) ways.
Therefore, probability = C(5, 4)/C(12, 4) = 5/495
1.50 If each element of a second-order determinant is either zero or one, what is the probability that the
value of the determinant is positive?
Solution The number of elements in a 2 × 2 determinant is 4 and each element can take 2 values. Therefore,
the total number of determinants is 2⁴ = 16. Out of the 16, the following determinants produce a positive
value:
|1 0|   |1 0|   |1 1|
|0 1|   |1 1|   |0 1|
Therefore, the probability = 3/16
1.51 An urn contains 3 red and 6 blue balls. Two balls are drawn at random with replacement. Find the
probability of getting (a) 2 red balls, (b) two blue balls, and (c) one red and one blue ball.
Solution
(a) The total number of balls in the urn is 9. The probability of drawing a red ball from the 9 balls is
3/9 = 1/3.
Since the ball is replaced, the probability of drawing a red ball the second time is also 1/3.
So the probability of drawing two red balls with replacement is (1/3)² = 1/9.
(b) Similarly, the probability of drawing two blue balls with replacement is (6/9)² = (2/3)² = 4/9
(c) One red ball and one blue ball can be drawn in two ways:
probability of drawing first a red ball and then a blue ball + probability of drawing first a blue ball
and then a red ball
= (1/3)(2/3) + (2/3)(1/3) = 2/9 + 2/9 = 4/9
1.52 A box contains 3 black balls and 6 green balls. One ball at a time is drawn at random, its colour is
noted and the ball is replaced in the box for the next draw. Find the probability that the first green ball is
drawn on the third draw.
Solution Let G denote drawing a green ball and B denote drawing a black ball. The total number of balls
in the box is 9. Since the first green ball is drawn on the third draw, the first two draws are black balls. The
probability of drawing a black ball in the first draw is 3/9 = 1/3.
Since the ball is replaced, the probability of drawing a black ball in the second draw is also 3/9 = 1/3.
The probability of drawing a green ball in the third draw is 6/9 = 2/3. Since the draws are independent, the
required probability is
P(BBG) = (1/3)(1/3)(2/3) = 2/27
1.11 How many different words can be formed by using all the letters of the word “ALLAHABAD”?
(Ans. 7560)
1.12 A group consists of 4 girls and 7 boys. In how many ways can a team of 5 members be selected if the team has
(i) at least one boy and one girl, (ii) at least 3 girls? (Ans. 441, 91)
1.13 A bag contains 5 black and 6 red balls. Determine the number of ways in which 2 black and 3 red balls can be
selected. (Ans. 200)
1.14 Among 14 players, 5 are bowlers. In how many ways may a team of 11 be formed with at least 4 bowlers?
(Ans. 264)
1.15 Five cards are drawn from a well-shuffled pack of 52 cards. Find the probability that all five cards are hearts.
(Ans. 33/66640)
1.16 Two dice are thrown together. What is the probability that the sum of the two numbers is divisible by 3 or 4?
(Ans. 5/9)
1.17 Three letters are dictated to three persons and an envelope is addressed to each of them; the letters are inserted
into the envelopes at random so that each envelope contains exactly one letter. Find the probability that at least one letter
is in its proper envelope. (Ans. 2/3)
1.18 A box contains 100 bulbs, 20 of which are defective. If 10 bulbs are selected at random, what is the probability that
at least one is defective? (Ans. 0.9048)
Solved Problems
1.53 Two dice are thrown together. Find the probability that the sum of the numbers on the two faces is
divisible by 3 or 4.
Solution Let A be the event that the sum is divisible by 3, and B the event that the sum is divisible
by 4.
Two dice can be thrown in 6² = 36 ways. That is, n(S) = 36.
Out of the 36 possible outcomes, the outcomes for which the sum is divisible by 3 are
(1, 2), (2, 1), (5, 1), (1, 5), (2, 4), (4, 2), (3, 3), (6, 3), (3, 6), (4, 5), (5, 4), (6, 6): 12 ways
The outcomes for which the sum is divisible by 4 are (1, 3), (3, 1), (2, 2), (2, 6), (6, 2), (3, 5), (5, 3), (4, 4),
(6, 6): 9 ways
The sum is divisible by both 3 and 4 only for (6, 6): 1 way
We have P(A) = 12/36 = 1/3
P(B) = 9/36 = 1/4
P(A ∩ B) = 1/36
P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
= 1/3 + 1/4 − 1/36 = 20/36 = 5/9
1.54 Determine the probability of the card being either red or a king when one card is drawn from a
regular deck of 52 cards.
Solution Let A be the event of drawing a red card and B the event of drawing a king. Then A ∩ B is the
event of drawing a card which is both red and a king.
Out of the 52 cards, the number of red cards is 26.
Therefore, P[A] = 26/52
The number of kings is 4. Therefore,
P[B] = 4/52
Also, P[A ∩ B] = 2/52
P[A ∪ B] = P[A] + P[B] − P[A ∩ B]
= 26/52 + 4/52 − 2/52 = 28/52 = 7/13
Practice Problems
1.19 One number is chosen from numbers 1 to 200. What is the probability that it is divisible by 4 or 6?
(Ans. 67/200)
1.20 In a class of 60 students, 30 opted for computer networks (CN) as an elective, 32 opted for operating systems (OS)
and 24 opted for both computer networks and operating systems. If one of these students is selected at random, what is
the probability that (i) the student opted for CN or OS, (ii) the student opted for OS but not CN? (Ans. (i) 19/30, (ii) 2/15)
P(A1 ∩ A3) = P(A1 ∩ A2 ∩ A3) + P(A1 ∩ A2′ ∩ A3)
P(A1 ∩ A3)/P(A3) = P(A1 ∩ A2 ∩ A3)/P(A3) + P(A1 ∩ A2′ ∩ A3)/P(A3)
P(A1 | A3) = P(A1 ∩ A2 | A3) + P(A1 ∩ A2′ | A3)
The conditional probability P(A | B) can be computed in two ways:
• considering the probability of A with respect to the reduced sample space B, or
• computing P(A ∩ B) and P(B) with respect to the original sample space S.
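Both routes give the same answer, as a minimal Python sketch illustrates using the die events of the solved problem that follows (A = prime number, B = odd number; illustrative code, not from the text):

    from fractions import Fraction

    S = {1, 2, 3, 4, 5, 6}
    A = {2, 3, 5}
    B = {1, 3, 5}

    # Way 1: reduced sample space B
    p1 = Fraction(len(A & B), len(B))
    # Way 2: P(A ∩ B) / P(B) in the original space S
    p2 = Fraction(len(A & B), len(S)) / Fraction(len(B), len(S))
    print(p1, p2)   # 2/3 2/3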
Solved Problems
1.55 A dice is rolled. If the outcome is an odd number, what is the probability that it is prime?
Solution Let A be the event of getting a prime number and B the event of getting an odd number. The
universal sample space
S = {1, 2, 3, 4, 5, 6}
A = {2, 3, 5}
B = {1, 3, 5}
A ∩ B = {3, 5}
P(A) = 3/6 = 1/2; P(B) = 3/6 = 1/2
P(A ∩ B) = 2/6 = 1/3
P(A | B) = P(A ∩ B)/P(B) = (1/3)/(1/2) = 2/3
1.56 A pair of dice is thrown. Find the probability of getting 7 as sum, if it is known that the second dice
always exhibits an odd number.
Solution Let A be the event of getting the sum as 7 and B the event that the second dice shows an odd
number. n(S) = 36
A = {(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)}; n(A) = 6
P(A) = n(A)/n(S) = 6/36 = 1/6
B = {(1, 1), (1, 3), (1, 5), (2, 1), (2, 3), (2, 5), (3, 1), (3, 3), (3, 5), (4, 1), (4, 3), (4, 5),
(5, 1), (5, 3), (5, 5), (6, 1), (6, 3), (6, 5)}
n(B) = 18
P(B) = 18/36 = 1/2
A ∩ B = {(2, 5), (4, 3), (6, 1)}
P(A ∩ B) = 3/36 = 1/12
P(A | B) = P(A ∩ B)/P(B) = (1/12)/(1/2) = 1/6
1.57 A couple has two children. Find the probability that (a) both the children are boys, if it is known
that the older is a boy, and (b) both the children are boys, if it is known that at least one of the children is
a boy.
Solution The couple can have two children in four different ways:
S = {B1B2, B1G2, G1B2, G1G2}
(a) Let B be the event that both children are boys:
B = {B1B2}
Let C be the event that the older child is a boy:
C = {B1B2, B1G2}
B ∩ C = {B1B2}
P(B | C) = n(B ∩ C)/n(C) = 1/2
(b) Let D be the event that at least one of the children is a boy:
D = {B1B2, B1G2, G1B2}
P(B | D) = P(B ∩ D)/P(D) = (1/4)/(3/4) = 1/3
1.58 A fair dice is rolled. Consider the events
A = {1, 3, 5}, B = {1, 2} and C = {1, 3, 4, 5}
Find (a) P(A | B) (b) P(B | A) (c) P(A | C) (d) P(C | A) (e) P(A ∪ B | C)
Solution The sample space
S = {1, 2, 3, 4, 5, 6}; given A = {1, 3, 5}
B = {1, 2}; C = {1, 3, 4, 5}
We can find n(S) = 6, n(A) = 3, n(B) = 2, n(C) = 4
P(A) = 3/6 = 1/2; P(B) = 2/6 = 1/3; P(C) = 4/6 = 2/3
P(A ∩ B) = 1/6; P(A ∩ C) = 3/6 = 1/2
P(B ∩ C) = 1/6; P(A ∪ B) = 4/6 = 2/3; P(A ∩ B ∩ C) = 1/6
(a) P(A | B) = P(A ∩ B)/P(B) = (1/6)/(1/3) = 1/2
(b) P(B | A) = P(A ∩ B)/P(A) = (1/6)/(1/2) = 1/3
(c) P(A | C) = P(A ∩ C)/P(C) = (1/2)/(2/3) = 3/4
(d) P(C | A) = P(A ∩ C)/P(A) = (1/2)/(1/2) = 1
(e) P(A ∪ B | C) = P{(A ∪ B) ∩ C}/P(C) = P{(A ∩ C) ∪ (B ∩ C)}/P(C)
= [P(A ∩ C) + P(B ∩ C) − P(A ∩ B ∩ C)]/P(C)
= (1/2 + 1/6 − 1/6)/(2/3) = (1/2)/(2/3) = 3/4
1.59 A fair dice is rolled. Consider the events
A = {1, 3, 6}, B = {1, 2, 4} and C = {1, 3, 4, 5}
Find (a) P(A | B) (b) P(B | A) (c) P(A | C) (d) P(C | A) (e) P(A ∪ B | C) (f) P(A ∩ B | C)
Solution
First method The sample space
S = {1, 2, 3, 4, 5, 6}
Given A = {1, 3, 6}; B = {1, 2, 4}; C = {1, 3, 4, 5}
Here A ∩ B = {1}, A ∩ C = {1, 3}, B ∩ C = {1, 4}, A ∪ B = {1, 2, 3, 4, 6},
(A ∪ B) ∩ C = {1, 3, 4} and A ∩ B ∩ C = {1}.
(a) P(A | B) = n(A ∩ B)/n(B) = 1/3
(b) P(B | A) = n(A ∩ B)/n(A) = 1/3
(c) P(A | C) = n(A ∩ C)/n(C) = 2/4 = 1/2
(d) P(C | A) = n(A ∩ C)/n(A) = 2/3
(e) P(A ∪ B | C) = n((A ∪ B) ∩ C)/n(C) = 3/4
(f) P(A ∩ B | C) = n(A ∩ B ∩ C)/n(C) = 1/4
Second method
P(A) = 3/6 = 1/2; P(B) = 3/6 = 1/2; P(C) = 4/6 = 2/3
P(A ∩ B) = 1/6; P(A ∩ C) = 2/6 = 1/3; P(B ∩ C) = 2/6 = 1/3
P(A ∪ B) = 5/6; P(A ∩ B ∩ C) = 1/6
(a) P(A | B) = P(A ∩ B)/P(B) = (1/6)/(1/2) = 1/3
(b) P(B | A) = P(A ∩ B)/P(A) = (1/6)/(1/2) = 1/3
(c) P(A | C) = P(A ∩ C)/P(C) = (1/3)/(2/3) = 1/2
(d) P(C | A) = P(A ∩ C)/P(A) = (1/3)/(1/2) = 2/3
(e) P(A ∪ B | C) = P{(A ∪ B) ∩ C}/P(C) = [P(A ∩ C) + P(B ∩ C) − P(A ∩ B ∩ C)]/P(C)
= (1/3 + 1/3 − 1/6)/(2/3) = (1/2)/(2/3) = 3/4
(f) P(A ∩ B | C) = P{(A ∩ B) ∩ C}/P(C) = (1/6)/(2/3) = 1/4
1.60 One ticket is selected at random from 50 tickets numbered 00, 01, 02, …, 49. What is the probability
that the sum of the digits on the selected ticket is 8 given that the product of these digits is zero?
Solution Let A be the event in which the sum of the digits is 8 and B the event in which the product
of the digits is zero.
The sample space of Event B = {00, 01, 02, 03, 04, 05, 06, 07, 08, 09, 10, 20, 30, 40}
P(B) = 14/50; P(A ∩ B) = 1/50 (only the ticket 08 qualifies)
P(A | B) = P(A ∩ B)/P(B) = (1/50)/(14/50) = 1/14
1.61 A mechanical system consists of two subsystems A and B. From the following probabilities, find:
(a) P(A fails | B has failed)
(b) P(A fails alone)
P(A fails) = 0.20, P(B fails alone) = 0.15
P(A and B fail) = 0.15
Solution
Given P(A) = 0.2,
P(A′ ∩ B) = 0.15 and P(A ∩ B) = 0.15
We have
P(A′ ∩ B) = P(B) − P(A ∩ B)
⇒ P(B) = P(A′ ∩ B) + P(A ∩ B) = 0.15 + 0.15 = 0.30
(a) P(A fails | B has failed) = P(A ∩ B)/P(B) = 0.15/0.30 = 0.5
(b) P(A fails alone) = P(A) − P(A ∩ B) = 0.20 − 0.15 = 0.05
1.62 In a school, there are 500 students, out of which 230 are girls. It is known that 10% of the girls study
in Class X. What is the probability that a student chosen at random studies in Class X, given that the chosen
student is a girl?
Solution Let A be the event that a student chosen at random studies in Class X and B the event of
selecting a girl. We have to find the probability that the selected student is from Class X given that the
chosen student is a girl, that is, P(A | B).
P(A | B) = n(A ∩ B)/n(B)
n(A ∩ B) = 10% of 230 = 23
n(B) = 230
P(A | B) = 23/230 = 1/10 = 0.1
1.63 Two cards are drawn from a well-shuffled pack of 52 cards without replacement. What is the
probability that one is a red queen and the other is a black king?
Solution Let Ri be the event of drawing a red queen in the ith draw and Ki the event of drawing a black
king in the ith draw. The two cards can be drawn in two ways: the red queen on the first draw and the black
king on the second, or the black king on the first draw and the red queen on the second.
Therefore, the required probability is
= P[(R1 ∩ K2) ∪ (K1 ∩ R2)]
= P(R1 ∩ K2) + P(K1 ∩ R2)
= P(R1) P(K2 | R1) + P(K1) P(R2 | K1)
Now,
P(R1) = 2/52, P(K2 | R1) = 2/51
P(K1) = 2/52, P(R2 | K1) = 2/51
Probability = (2/52)(2/51) + (2/52)(2/51) = 2/663
1.64 Cards are numbered 1 to 21. Two cards are drawn one after the other without replacement. Find the
probability that one card bears a multiple of 6 and the other a multiple of 10.
Solution Two cards can be drawn in the following mutually exclusive ways:
(a) the first card bears a multiple of 6 and the second a multiple of 10, or
(b) the first card bears a multiple of 10 and the second a multiple of 6.
Let A1 be the event that the first card drawn bears a multiple of 6, A2 the event that the second card drawn
bears a multiple of 6, B1 the event that the first card drawn bears a multiple of 10, and B2 the event that the
second card drawn bears a multiple of 10.
Therefore, the probability
= P{(A1 ∩ B2) ∪ (B1 ∩ A2)}
= P(A1 ∩ B2) + P(B1 ∩ A2)
= P(A1) P(B2 | A1) + P(B1) P(A2 | B1)
P(A1) = 3/21; P(B2 | A1) = 2/20
P(B1) = 2/21; P(A2 | B1) = 3/20
⇒ Probability = (3/21)(2/20) + (2/21)(3/20)
= 6/420 + 6/420 = 12/420 = 2/70 = 1/35
1.65 From a bag containing 4 white and 6 black balls, two balls are drawn at random. If the balls are
drawn one after the other without replacement, find the probability that
(a) Both balls are white
(b) Both balls are black
(c) The first ball is white and the second ball is black
(d) One ball is white and the other is black
Solution The total number of balls in the bag is ten: four white and six black.
(a) Let W1 be the event of drawing a white ball first and W2 the event of drawing a white ball second.
P(W1) = (number of white balls)/(total number of balls) = 4/10 = 2/5
P(W2 | W1) = 3/9 = 1/3
P(W1 ∩ W2) = P(W1) P(W2 | W1) = (4/10)(3/9) = 2/15
(b) Let B1 be the event of drawing a black ball first and B2 the event of drawing a black ball second.
P(B1) = 6/10 = 3/5
P(B2 | B1) = 5/9
P(B1 ∩ B2) = P(B1) P(B2 | B1) = (3/5)(5/9) = 1/3
(c) Let W1 be the event of drawing the first ball white and B2 the event of drawing the second ball
black.
P(W1) = 4/10 = 2/5
P(B2 | W1) = 6/9 = 2/3
P(W1 ∩ B2) = P(W1) P(B2 | W1) = (2/5)(2/3) = 4/15
(d) One ball white and the other black can happen in two ways: the first is white and the second black,
or the first is black and the second white. The required probability is
P(W1 ∩ B2) + P(B1 ∩ W2)
P(W1 ∩ B2) = 4/15
P(B1 ∩ W2) = P(B1) P(W2 | B1) = (6/10)(4/9) = 4/15
P(W1 ∩ B2) + P(B1 ∩ W2) = 4/15 + 4/15 = 8/15
1.66 A student cannot qualify for the interview if he fails in subjects A and B. The probabilities that he
fails in A and B are known to be 0.01 and 0.03 respectively. It is also known that the probability that he fails
in subject B, given that he has failed in A, is 0.06.
(a) What is the probability that he cannot qualify for the interview?
(b) What is the probability that he fails in A given that he failed in B?
Solution Let A and B denote the events that the student fails in subjects A and B respectively.
Given P(A) = 0.01, P(B) = 0.03 and P(B | A) = 0.06.
(a) P(cannot qualify) = P(A ∩ B) = P(A) P(B | A) = 0.01 × 0.06 = 0.0006
(b) P(A | B) = P(A ∩ B)/P(B) = 0.0006/0.03 = 0.02
1.67 In a box, there are 100 resistors having the resistances and tolerances shown in Table 1.3. A resistor
is chosen with each resistor equally likely to be chosen. For the three events A: "draw a 47 Ω resistor",
B: "draw a 5% tolerance resistor" and C: "draw a 100 Ω resistor", determine the joint and conditional
probabilities.
Table 1.3
Resistance (Ω)   5%   10%   Total
22               10    14      24
47               28    16      44
100              24     8      32
Total            62    38     100
P(B | A) = P(drawing a 5% tolerance resistor among the 47 Ω resistors) = 28/44 = 0.636
P(C | A) = 0 (a resistor cannot be both 47 Ω and 100 Ω)
P(C | B) = P(drawing a 100 Ω resistor among the 5% tolerance resistors) = 24/62 = 0.387
Practice Problems
1.21 A box contains 5 red and 4 white balls. Two balls are drawn successively from the box without replacement, and it
is noted that the second one is white. What is the probability that the first is also white? (Ans. 3/8)
1.22 The probability that a communication system has high selectivity is 0.54, the probability that it has high fidelity is
0.81, and the probability that it has both is 0.18. Find the probability that a system with high fidelity will have high
selectivity. (Ans. 0.22)
Solved Problems
1.68 A bag contains 6 white and 9 black balls. Two balls are drawn in succession without replacement.
What is the probability that the first is white and the second is black?
Solution Let A be the event of getting a white ball in the first draw and B the event of getting a black
ball in the second draw.
The total number of balls is 15, and the probability of getting a white ball in the first draw is
P(A) = 6/15 = 2/5
The probability of getting a black ball in the second draw, when a white ball has already been drawn in the
first draw, is
P(B | A) = 9/14
The required probability of getting a white ball in the first draw and a black ball in the second draw is
P(A ∩ B) = P(A) P(B | A)
= (2/5)(9/14) = 9/35
1.69 Three cards are drawn successively without replacement from a pack of 52 well-shuffled cards.
What is the probability that first two cards are kings and the third draw is an ace?
Solution The cards are drawn successively without replacement. Let K1 and K2 be the events of drawing
a king in the first and second draws respectively, and A3 the event of drawing an ace in the third draw.
P(K1) = P(first card is a king) = 4/52
P(K2 | K1) = P(second card drawn is a king given that the first card is a king) = 3/51
P(A3 | K1 ∩ K2) = P(third card drawn is an ace given that the first two cards are kings) = 4/50
P(K1 ∩ K2 ∩ A3) = P(K1) P(K2 | K1) P(A3 | K1 ∩ K2)
= (4/52)(3/51)(4/50) = 2/(17 × 13 × 25) = 2/5525
1.70 A jar contains two white and three black balls. A sample of size 4 is made. What is the probability
that the sample is in the order {white, black, white, black}?
Solution The jar contains two white and three black balls. We consider the problem without
replacement.
Let W be the event of selecting a white ball and B the event of selecting a black ball; the suffix on W and
B is the order in which the ball is drawn. Then
P(W1 ∩ B2 ∩ W3 ∩ B4) = P(W1) P(B2 | W1) P(W3 | W1 ∩ B2) P(B4 | W1 ∩ B2 ∩ W3)
P(W1) = (number of white balls)/(total number of balls) = 2/5
P(B2 | W1) = 3/4
(since a ball is drawn and not replaced, the total number of balls is now 4)
P(W3 | W1 ∩ B2) = 1/3
P(B4 | W1 ∩ B2 ∩ W3) = 2/2 = 1
P(W1 ∩ B2 ∩ W3 ∩ B4) = (2/5)(3/4)(1/3)(1) = 0.1
1.71 In a certain group of engineers, 60% have an insufficient background of information theory, 50%
have inadequate knowledge of probability, and 80% are in either one or both of the inadequate categories.
What is the percentage of engineers who have adequate knowledge of probability among those who have a
sufficient background of information theory?
Solution Let E1 be the event that an engineer has a sufficient background of information theory and E2
the event that an engineer has adequate knowledge of probability.
Given:
P(E1′) = 0.6 and P(E2′) = 0.5
P(E1′ ∪ E2′) = 0.8
Also, we can write P(E1) = 0.4 and P(E2) = 0.5
P(E1 ∩ E2) = 1 − P(E1′ ∪ E2′)
= 1 − 0.8 = 0.2
P(E2 | E1) = P(E1 ∩ E2)/P(E1) = 0.2/0.4 = 0.5
1.72 A bag contains 6 white, 5 red and 7 black balls. If four balls are drawn one by one without
replacement, find the probability of getting all white balls.
Solution Let A, B, C and D denote the events of getting a white ball in the first, second, third and fourth
draws respectively. The required probability is
P(A ∩ B ∩ C ∩ D)
Using the multiplication theorem, we have
P(A ∩ B ∩ C ∩ D) = P(A) P(B | A) P(C | A ∩ B) P(D | A ∩ B ∩ C)
P(A) = 6/18 = 1/3
P(B | A) = P(drawing a white ball in the second draw given that a white ball was drawn
in the first draw)
= 5/17
Since the ball is not replaced, the probability of drawing a white ball in the third draw given that two
white balls were drawn in the first two draws is
P(C | A ∩ B) = 4/16 = 1/4
Similarly,
P(D | A ∩ B ∩ C) = 3/15 = 1/5
P(A ∩ B ∩ C ∩ D) = P(A) P(B | A) P(C | A ∩ B) P(D | A ∩ B ∩ C)
= (1/3)(5/17)(1/4)(1/5) = 1/204
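The chain of conditional probabilities can be evaluated step by step. A minimal Python sketch with exact fractions (illustrative only):

    from fractions import Fraction

    # Problem 1.72: four successive white draws from 6 white among 18 balls
    white, total = 6, 18
    p = Fraction(1)
    for i in range(4):                        # four draws without replacement
        p *= Fraction(white - i, total - i)   # P(white on this draw | all previous white)
    print(p)                                  # 1/204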
Practice Problems
1.23 Four cards are drawn successively, without replacement, from a pack of 52 well-shuffled cards. What is the
probability that the first two cards are queens, the third card drawn is a king, and the fourth one is an ace?
1.24 A box contains eight red balls, seven white balls, and five blue balls. If three balls are drawn successively from
the box, find the probability that they are drawn in the order red, white and blue if each ball is replaced after it has been
drawn. (Ans. 7/200)
Solved Problems
1.73 A, B and C toss a coin in turn, in that order. The one who gets a head first wins. What are their
respective chances?
Solution The probability that A wins in the first round is 1/2. If all of them fail in the first round, the
probability that A wins in the second round is (1/2)³(1/2); similarly, A wins in the third round with
probability (1/2)⁶(1/2), and so on.
So the required probability, using the addition theorem for mutually exclusive events, is
1/2 + (1/2)³(1/2) + (1/2)⁶(1/2) + … = (1/2)/(1 − (1/2)³) = 4/7
The probability that B wins in the first round is equal to (1/2)(1/2) = 1/4.
Similarly, B wins in the second round with probability (1/2)³(1/4), and so on.
The probability of B winning is
1/4 + (1/2)³(1/4) + (1/2)⁶(1/4) + … = (1/4)/(1 − (1/2)³) = 2/7
The probability of C winning is the remainder, 1 − 4/7 − 2/7 = 1/7.
Solved Problems
1.74 If P(A) = 0.75, P(B) = 0.4 and P(A ∩ B) = 0.3, are the events A and B independent?
Solution Since P(A) P(B) = 0.75 × 0.4 = 0.3 = P(A ∩ B), the events A and B are independent.
1.75 If A and B are independent events of a random experiment such that P(A ∩ B) = 1/5 and
P(A′ ∩ B′) = 1/4, then find P(A).
Solution Given P(A ∩ B) = 1/5 and P(A′ ∩ B′) = 1/4
Since A and B are independent,
P(A ∩ B) = P(A) P(B) = 1/5
and P(A′) P(B′) = 1/4
Let P(A) = x and P(B) = y
Then
xy = 1/5
P(A′) P(B′) = [1 − P(A)][1 − P(B)] = (1 − x)(1 − y) = 1/4
1 − x − y + xy = 1/4
1 − x − y = 1/4 − 1/5 = 1/20
x + y = 19/20 = 0.95
(x − y)² = (x + y)² − 4xy = (19/20)² − 4/5
x − y = 0.32
∴ x = 0.635
y = 0.315
⇒ P(A) = 0.635
1.76 If A and B are independent events such that P(B) = 3/5 and P(A′ ∪ B′) = 0.75, then find P(A).
(c) A′ ∩ B′ = (A ∪ B)′
P(A′ ∩ B′) = P((A ∪ B)′) = 1 − P(A ∪ B)
= 1 − {P(A) + P(B) − P(A ∩ B)}
= 1 − P(A) − P(B) + P(A) P(B)
= [1 − P(B)] − P(A)[1 − P(B)]
= [1 − P(A)][1 − P(B)] = P(A′) P(B′)
Therefore, A′ and B′ are also independent.
1.78 A bag contains 4 white, 6 red and 8 black balls. Four balls are drawn one by one with replacement.
What is the probability that at least one is white?
Solution Let Ai be the event that the ball drawn in the ith draw is white, 1 ≤ i ≤ 4. Since the balls are drawn
with replacement, the events A1, A2, A3 and A4 are independent.
P(A1) = P(A2) = P(A3) = P(A4) = 4/18 = 2/9
The probability that at least one ball is white is
P(A1 ∪ A2 ∪ A3 ∪ A4) = 1 − P(A1′) P(A2′) P(A3′) P(A4′)
P(A1′) = 1 − P(A1) = 1 − 2/9 = 7/9
Similarly, P(A2′) = P(A3′) = P(A4′) = 7/9
P(A1 ∪ A2 ∪ A3 ∪ A4) = 1 − (7/9)⁴
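The complement trick used above is easy to check numerically. A minimal Python sketch for Problem 1.78 (illustrative only):

    from fractions import Fraction

    # P(at least one white in 4 draws with replacement) = 1 - P(no white)^4
    p_not_white = Fraction(14, 18)        # 14 non-white balls among 18
    p_at_least_one = 1 - p_not_white**4
    print(p_at_least_one)                 # 1 - (7/9)^4 = 4160/6561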
1.79 What is the probability that the series circuit in Fig. 1.14, with three switches S1, S2 and S3 having
probabilities 1/2, 1/4 and 2/3 respectively of functioning, will work?
Fig. 1.14 [circuit diagram not reproduced]
Solution A series circuit works only if all three switches function. Assuming the switches function
independently,
P(circuit works) = (1/2)(1/4)(2/3) = 1/12
Practice Problems
1.25 A dice is thrown thrice. Find the probability of getting an odd number at least once. (Ans. 7/8)
1.26 Given P(A ∪ B) = 5/6, P(A ∩ B) = 1/3 and P(B′) = 1/2. Test whether A and B are independent or not. (Ans. independent)
Solved Problems
1.80 The probabilities that a husband and a wife will be alive 25 years from now are 0.80 and 0.85
respectively. Find the probability that in 25 years (a) both will be alive, (b) neither will be alive, (c) at least
one will be alive, and (d) only one will be alive.
Assume that the husband and the wife survive independently.
Solution Let A be the event that the husband is alive and B be the event that the wife is alive.
P(A) = 0.80, P(B) = 0.85
(a) P(both alive) = P(A ∩ B) = P(A)P(B) = (0.80)(0.85) = 0.68
(b) P(neither alive) = P(Ā ∩ B̄) = P(Ā)P(B̄) = (0.20)(0.15) = 0.03
(c) P(at least one alive) = 1 − P(neither alive) = 1 − 0.03 = 0.97
(d) P(only one alive) = P(A)P(B̄) + P(Ā)P(B) = (0.80)(0.15) + (0.20)(0.85) = 0.29
1.81 A bag contains 18 balls, 6 of which are white. A ball is drawn and replaced, three times in all. Find
the probability that at least one of the balls drawn is white.

Solution Let Ei be the event of drawing a white ball in the ith draw, 1 ≤ i ≤ 3. Since the ball is
replaced after every draw, the events Ei, 1 ≤ i ≤ 3, are independent. Also, P(Ei) = 6/18 = 1/3.
The required probability is
P(E1 ∪ E2 ∪ E3) = 1 − P(Ē1) P(Ē2) P(Ē3) = 1 − (2/3)^3 = 19/27
1.82 A and B alternately throw a pair of dice. A wins if he throws 6 before B throws 7, and B wins if he
throws 7 before A throws 6. If A begins, show that his chance of winning is 30/61.
Solution The probability that A throws a sum of 6 is 5/36, and the probability that B throws a sum of 7 is
6/36 = 1/6. A wins on his first throw, or after both fail once, or after both fail twice, and so on. The
probability that both fail in a round is (31/36)(5/6) = (31 × 5)/(36 × 6). Hence A's chance of winning is

(5/36)[1 + (31 × 5)/(36 × 6) + ((31 × 5)/(36 × 6))^2 + ...]
= (5/36) · 1/[1 − (31 × 5)/(36 × 6)]
= (5/36)[(36 × 6)/(36 × 6 − 31 × 5)] = 30/61
1.83 A pair of fair dice is rolled together till a sum of either 4 or 6 is obtained. Find the probability that
6 comes before 4.
Solution Let A denote the event that a sum of 6 occurs, B the event that a sum of 4 occurs, and C the
event that neither a sum of 4 nor a sum of 6 occurs. We know n(S) = 36.
The possible ways of getting a sum of 4 are (1, 3), (3, 1) and (2, 2).
Therefore, P(B) = 3/36 = 1/12
The possible ways of getting a sum of 6 are (5, 1), (1, 5), (2, 4), (4, 2) and (3, 3).
P(A) = 5/36
P(C) = 1 − [P(A) + P(B)] = 1 − (1/12 + 5/36) = 28/36 = 7/9
The probability that 6 comes before 4 is
P(A) + P(C) P(A) + P(C) P(C) P(A) + ...
= P(A)[1 + P(C) + P^2(C) + ...]
= P(A)/(1 − P(C)) = (5/36)/(1 − 7/9) = (5/36)/(2/9) = 5/8
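A race between two stopping events like this is simple to simulate; the sketch below (ours, not the book's) estimates the probability that a sum of 6 appears before a sum of 4.

```python
# Monte Carlo check of Solved Problem 1.83; the analytic answer is 5/8 = 0.625.
import random

rng = random.Random(7)

def six_before_four(rng):
    while True:
        s = rng.randint(1, 6) + rng.randint(1, 6)
        if s == 6:
            return True
        if s == 4:
            return False

trials = 100_000
hits = sum(six_before_four(rng) for _ in range(trials))
print(hits / trials)   # ≈ 0.625
```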
Proof Let S be the sample space and let E1, E2, …, EN be N mutually exclusive and exhaustive events.
These events satisfy
Ei ∩ Ej = ∅, i ≠ j, i, j = 1, 2, …, N
Also, the sample space is
S = E1 ∪ E2 ∪ E3 ∪ … ∪ EN
For any event A we can write A = (A ∩ E1) ∪ (A ∩ E2) ∪ … ∪ (A ∩ EN), and since these pieces are mutually
exclusive,
P(A) = P(E1)P(A | E1) + P(E2)P(A | E2) + … + P(EN)P(A | EN)
The probabilities P(Ei) and P(Ei | A) are known as the a priori and a posteriori probabilities respectively.
Solved Problems
1.84 A bag contains 5 red and 4 black balls. A second bag contains 3 red and 5 black balls. One bag is selected
at random. From the selected bag, one ball is drawn. Find the probability that the ball drawn is black.

Solution P(black) = (1/2)(4/9) + (1/2)(5/8) = 2/9 + 5/16 = 77/144
1.85 In a class, 60% of the students are boys and the remaining are girls. It is known that the probability
of a boy getting distinction is 0.30 and that of girl getting distinction is 0.35. Find the probability that a
student chosen at random will get distinction.
Solution Let E1 and E2 represent the events of choosing a boy and girl respectively, and A be the event
of getting distinction by a student.
P(E1) = 0.6, P(E2) = 0.4
P(A | E1) = 0.30, P(A | E2) = 0.35
P(A) = P(E1) P(A | E1) + P(E2) P(A | E2)
= (0.6)(0.30) + (0.4)(0.35) = 0.18 + 0.14 = 0.32
1.86 A letter is known to have come from either TATANAGAR or CALCUTTA. On the envelope, two
consecutive letters TA are visible. Find the probability that the letter came from CALCUTTA.
Solution Let E1 be the event that the letter came from TATANAGAR and E2 the event that it came from
CALCUTTA. Then P(E1) = P(E2) = 1/2.
Let A denote the event that the two consecutive letters visible on the envelope are TA.
In TATANAGAR there are 8 pairs of consecutive letters (TA, AT, TA, AN, NA, AG, GA, AR), of which 2
are TA. Therefore,
P(A | E1) = 2/8
In CALCUTTA there are 7 pairs of consecutive letters (CA, AL, LC, CU, UT, TT, TA), of which 1 is TA.
Therefore,
P(A | E2) = 1/7
The probability that the letter came from CALCUTTA can be obtained using Bayes' theorem:
P(E2 | A) = P(E2) P(A | E2) / [P(E1) P(A | E1) + P(E2) P(A | E2)]
= (1/2)(1/7) / [(1/2)(2/8) + (1/2)(1/7)]
= (1/14)/(1/8 + 1/14) = 4/11
Practice Problem
1.27 A letter is known to have come either from LONDON or CLIFTON. On the postmark, only the two consecutive
letters ‘ON’ are legible. What is the chance that it has come from LONDON? (Ans. 12/17)
Solved Problems
1.87 In a bolt factory, machines A, B and C manufacture 25, 35 and 40% of the total bolts respectively.
Of their outputs, 5, 4 and 2 percent respectively are defective bolts. A bolt is drawn at random and found to
be defective. What is the probability that the bolt came from machines A, B and C?
Solution Let E1 be the event that a bolt is manufactured by Machine A, E2 by Machine B, and E3 by
Machine C.
Then P(E1) = 25/100 = 1/4; P(E2) = 35/100 = 7/20; P(E3) = 40/100 = 2/5
Let A be the event of drawing a defective bolt. Then P(A | E1) is the probability that the bolt drawn is
defective given that it is manufactured by Machine A:
P(A | E1) = 5/100 = 0.05
Similarly, P(A | E2) = 4/100 = 0.04 and P(A | E3) = 2/100 = 0.02
A bolt is drawn at random. By the total probability theorem,
P(A) = P(E1)P(A | E1) + P(E2)P(A | E2) + P(E3)P(A | E3)
= (0.25)(0.05) + (0.35)(0.04) + (0.40)(0.02)
= 0.0125 + 0.0140 + 0.0080 = 0.0345
Using Bayes' theorem,
P(E1 | A) = P(E1)P(A | E1)/P(A) = 0.0125/0.0345 = 0.3623
P(E2 | A) = P(E2)P(A | E2)/P(A) = 0.0140/0.0345 = 0.4058
P(E3 | A) = P(E3)P(A | E3)/P(A) = 0.0080/0.0345 = 0.2319
(The three posterior probabilities add up to 1, as they must.)
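The Bayes computation above follows a fixed pattern (prior × likelihood, normalized by the total probability), so it is worth capturing once in code. The helper below is a sketch of ours, not the book's; the function name and argument order are our own choices.

```python
# Posterior probabilities for Solved Problem 1.87.
def posteriors(priors, likelihoods):
    """Return P(Ei | A) given priors P(Ei) and likelihoods P(A | Ei)."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)                  # total probability P(A)
    return [j / total for j in joint]

print(posteriors([0.25, 0.35, 0.40], [0.05, 0.04, 0.02]))
# [0.3623..., 0.4058..., 0.2318...]
```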
1.88 A binary computer communication channel has the following error probabilities:
P(R1 | S0) = 0.2, P(R0 | S1) = 0.06
where
S0 = {‘0’ sent} S1 = {‘1’ sent}
R0 = {’0’ received} R1 = {‘1’ received}
Suppose that 0 is sent with a probability of 0.8, find
(a) The probability that ‘1’ is received
(b) The probability that ‘1’ was sent given that 1 is received
(c) The probability that ‘0’ was sent given that ‘0’ is received
Solution
(a) Given P(R1 | S0) = 0.2, P(R0 | S1) = 0.06 and P(S0) = 0.8, so P(S1) = 0.2. Also,
P(R0 | S0) = 1 − P(R1 | S0) = 1 − 0.2 = 0.80 and P(R1 | S1) = 1 − P(R0 | S1) = 0.94
P(R1) = P(R1 | S0)P(S0) + P(R1 | S1)P(S1) = (0.2)(0.8) + (0.94)(0.2) = 0.348
(b) P(S1 | R1) = P(R1 | S1)P(S1)/P(R1) = (0.94)(0.2)/0.348 = 0.540
(c) P(S0 | R0) = P(R0 | S0)P(S0) / [P(R0 | S0)P(S0) + P(R0 | S1)P(S1)]
= (0.80)(0.8) / [(0.80)(0.8) + (0.06)(0.2)] = 0.64/0.652 = 0.982
1.89 A pack contains 4 white and 2 green pencils, another contains 3 white and 5 green pencils. If one
pencil is drawn from each pack, find the probability that
(a) Both are white, and
(b) One is white and another is green.
Solution Pack 1 contains 4 white (W) and 2 green (G) pencils, and Pack 2 contains 3 white and 5 green
pencils. One pencil is drawn from each pack, and the draws are independent.
(a) P(both white) = P(W from Pack 1) × P(W from Pack 2)
= (4/6)(3/8) = 0.25
(b) P(one white and one green) = P(W from Pack 1) × P(G from Pack 2) + P(W from Pack 2) × P(G from Pack 1)
= (4/6)(5/8) + (3/8)(2/6) = 0.542
1.90 What is the probability of drawing 3 white and 4 green balls from a bag that contains 5 white and
6 green balls, if 7 balls are drawn simultaneously at random?
Solution The bag contains 5 white (W) and 6 green (G) balls, and 7 balls are drawn at random.
Out of the total of 11 balls, a sample of 7 balls can be chosen in C(11, 7) ways. The 3 white balls can be
drawn in C(5, 3) ways and the 4 green balls in C(6, 4) ways.
The required probability is
C(5, 3) C(6, 4) / C(11, 7) = 10(15)/330 = 5/11
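Counting problems of this kind reduce to binomial coefficients, which the Python standard library provides directly; the one-liner below is a sketch of ours, not the book's.

```python
# Direct computation of Solved Problem 1.90.
from math import comb

p = comb(5, 3) * comb(6, 4) / comb(11, 7)
print(p)   # 0.4545... = 5/11
```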
1.91 An integer is chosen at random from the first 100 positive integers. What is the probability that the
integer chosen is divisible by 6 or 8?
Solution Let A be the event of the number being divisible by 6 and B be the event of a number being
divisible by 8.
Between 1 and 100, the total number of integers divisible by 6 is 16. That is, n(A) = 16.
Similarly, n(B) = 12.
Since 24, 48, 72 and 96 are divisible by both 6 and 8, n(A ∩ B) = 4.
n(A ∪ B) = n(A) + n(B) − n(A ∩ B) = 16 + 12 − 4 = 24
P(A ∪ B) = n(A ∪ B)/100 = 24/100 = 0.24
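The inclusion–exclusion count is easy to confirm by brute force; the check below is ours, not the book's.

```python
# Brute-force check of Solved Problem 1.91 over the integers 1..100.
divisible = [n for n in range(1, 101) if n % 6 == 0 or n % 8 == 0]
print(len(divisible), len(divisible) / 100)   # 24 0.24
```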
1.92 From a pack of 52 cards, 1 card is drawn at random. Find the probability of getting a queen.
Solution A queen can be chosen in C(4, 1) = 4 ways, and one card can be selected from 52 cards in
C(52, 1) = 52 ways.
P(getting a queen) = C(4, 1)/C(52, 1) = 4/52 = 1/13
1.93 What is the probability that out of 6 cards, taken from a full pack, 3 will be black and 3 will be
red?
Solution A full pack contains 52 cards, and six cards can be selected from the pack in C(52, 6) ways.
There are 26 black cards, and three of them can be selected in C(26, 3) ways. Similarly, the three red
cards can be selected in C(26, 3) ways. The total number of ways of choosing 3 black and 3 red cards is
therefore C(26, 3) C(26, 3).
The required probability is
C(26, 3) C(26, 3)/C(52, 6) = (2600)^2/20358520 ≈ 0.332
1.94 A coin is tossed three times. Find the probability that a head shows up at least once.
Solution Since the coin is tossed three times, there are 2^3 = 8 possible outcomes. Let A be the event
that a head shows up at least once.
P(A) = 1 − P(all tails) = 1 − 1/8 = 7/8
1.95 From a pack of 52 cards, Event ‘A’ is defined as drawing a king card, Event B defined as drawing a
jack or queen card, Event C is defined as drawing a heart card. Then find out which of them are statistically
independent and dependent events.
Solution Given that A is the event of drawing a king, B the event of drawing a jack or a queen, and C the
event of drawing a heart from a deck of 52 playing cards.
P(A) = 4/52 = 1/13; P(B) = 8/52 = 2/13; P(C) = 13/52 = 1/4
(a) P(A)P(B) = (1/13)(2/13) = 2/169 = 0.0118
But P(A ∩ B) = 0, since A and B are disjoint.
∴ P(A ∩ B) ≠ P(A)P(B)
A and B are not statistically independent.
(b) P(B)P(C) = (2/13)(1/4) = 1/26 = 0.0385
and P(B ∩ C) = 2/52 = 1/26, since there is one jack of hearts and one queen of hearts.
∴ P(B ∩ C) = P(B)P(C)
So B and C are statistically independent.
(c) P(A)P(C) = (1/13)(1/4) = 1/52 = 0.0192
and P(A ∩ C) = 1/52, since there is one king of hearts.
∴ P(A ∩ C) = P(A)P(C)
So A and C are statistically independent.
1.96 Find the probability of 3 coins falling all heads when tossed simultaneously.

Solution When three coins are tossed simultaneously, the total number of possible outcomes is 8. Out of
these eight outcomes, all heads occurs only once. Therefore, the required probability is 1/8 = 0.125.
1.97 Determine the probability of the card being either a red or a king when one card is drawn from a
regular deck of 52 cards.
Solution Let A be the event of drawing a red card and B be the event of drawing a king.
Since there are 26 red cards,
P(A) = 26/52 = 1/2 = 0.5
Since there are 4 kings,
P(B) = 4/52 = 1/13
There are two red kings (the king of diamonds and the king of hearts):
P(A ∩ B) = 2/52 = 1/26
∴ P(either red or king) = P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
= 1/2 + 1/13 − 2/52 = 28/52 = 7/13
1.98 In a box, there are 100 resistors having resistances and tolerances as shown in the table below. Let a
resistor be selected from the box, and assume each resistor has the same likelihood of being chosen. For the
three events A "draw a 47 Ω resistor", B "draw a resistor with 5% tolerance" and C "draw a 100 Ω resistor",
calculate P(A ∩ B), P(A ∩ C) and P(B ∩ C).
Number of resistors in the box by resistance and tolerance:

Resistance (Ω)   5%   10%   Total
22               10   14    24
47               28   16    44
100              24    8    32
Total            62   38    100

Solution Since each of the 100 resistors is equally likely to be chosen, the joint probabilities can be read
directly from the table:
P(A ∩ B) = P(47 Ω and 5%) = 28/100 = 0.28
P(A ∩ C) = 0, since a resistor cannot be both 47 Ω and 100 Ω
P(B ∩ C) = P(100 Ω and 5%) = 24/100 = 0.24
1.99 The coefficients a, b and c of a quadratic equation ax2 + bx + c = 0 are determined by throwing a
dice three times. Find the probability that (a) the roots are real, and (b) the roots are complex.
Solution The coefficients can take any value from 1 to 6. For the roots to be real, the relation ac ≤ b^2/4
must be satisfied.
If b = 1, no values of a and c satisfy the inequality.
If b = 2 then ac ≤ 1. Only a = 1, c = 1 satisfies the inequality: 1 way.
If b = 3 then ac ≤ 9/4. The (a, c) values can be (1, 1), (1, 2), (2, 1): a total of 3 ways.
If b = 4 then ac ≤ 4. The (a, c) values can be (1, 1), (1, 2), (1, 3), (1, 4), (2, 1), (2, 2), (3, 1), (4, 1): a total
of 8 ways.
If b = 5 then ac ≤ 25/4. The (a, c) values can be (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (2, 1), (2, 2),
(2, 3), (3, 1), (3, 2), (4, 1), (5, 1), (6, 1): a total of 14 ways.
If b = 6 then ac ≤ 9. The (a, c) values can be (1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6), (2, 1), (2, 2), (2, 3),
(2, 4), (3, 1), (3, 2), (3, 3), (4, 1), (4, 2), (5, 1), (6, 1): a total of 17 ways.

b                2   3   4   5    6
Number of ways   1   3   8   14   17   (total 43)

The total number of favourable ways is 43, and n(S) = 6^3 = 216.
P(real roots) = 43/216
P(complex roots) = 1 − 43/216 = 173/216
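The case-by-case count above can be confirmed by enumerating all 6^3 coefficient triples; the sketch below is ours, not the book's.

```python
# Enumeration check of Solved Problem 1.99: count (a, b, c) from three dice
# throws for which ax^2 + bx + c = 0 has real roots (b^2 >= 4ac).
from itertools import product

real = sum(1 for a, b, c in product(range(1, 7), repeat=3) if b * b >= 4 * a * c)
print(real, 6 ** 3 - real)   # 43 173
```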
1.100 What is the chance that a leap year selected at random will contain 53 Sundays?
Solution A leap year has 366 days, i.e., 52 complete weeks and 2 extra days. The two extra days can be
(Sun, Mon), (Mon, Tue), (Tue, Wed), (Wed, Thu), (Thu, Fri), (Fri, Sat) or (Sat, Sun): 7 equally likely
possibilities, of which 2 contain a Sunday. Hence the required probability is
n(A)/n(S) = 2/7
1.101 Four dice are thrown simultaneously. What is the probability that the sum of numbers is exactly
20?
Solution There are six possible outcomes for a single throw of a dice. When four dice are thrown
simultaneously, the number of possible outcomes is 64. The following combinations produce the sum as
exactly 20.
(a) Three dice showing 6 and one dice showing 2, (6, 6, 6, 2), (2, 6, 6, 6), (6, 2, 6, 6), (6, 6, 2, 6): a total
of 4 outcomes
(b) Two dice showing 6, one showing 5 and the other 3, that is (6, 6, 5, 3). The possible arrangements are
4!/2! = 12.
(c) Two dice with 6 and two dice with 4, that is (6, 6, 4, 4). The total number of possible arrangements is
4!/(2!2!) = 6.
(d) Two dice with 5, one with 6 and the other with 4, that is (5, 5, 6, 4). The total number of possible
arrangements is 4!/2! = 12.
(e) All four dice with 5, that is (5, 5, 5, 5). There is only one outcome.
Hence, the required probability is
(4 + 12 + 6 + 12 + 1)/6^4 = 35/1296
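Enumerating all 6^4 outcomes confirms the count of 35; this check is ours, not the book's.

```python
# Counting check of Solved Problem 1.101: four dice summing to exactly 20.
from itertools import product

favourable = sum(1 for dice in product(range(1, 7), repeat=4) if sum(dice) == 20)
print(favourable, favourable / 6 ** 4)   # 35 0.02700...
```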
1.102 An urn contains 6 white and 9 black balls. If 4 balls are randomly selected without replacement,
what is the probability that the first 2 selected are white and the last 2 black?

Solution Drawing without replacement, the required probability is
(6/15)(5/14)(9/13)(8/12) = 6/91
Equivalently, C(6, 2) C(9, 2)/C(15, 4) = 36/91 is the probability of getting 2 white and 2 black in any
order; dividing by the C(4, 2) = 6 equally likely orders of the colours again gives 6/91.
Practice Problem
1.28 When two balls are drawn in succession with replacement from a box consisting of 5 white and 7 black balls, find
the probability that (a) both are white, (b) both are black and (c) the first is white and the second is black.
Solved Problems
1.103 If two dice are thrown, what is the probability of obtaining a sum of at least 10?
Solution The outcomes with a sum of at least 10 are (4, 6), (5, 5), (6, 4), (5, 6), (6, 5) and (6, 6): 6
outcomes out of 36.
∴ The required probability is 6/36 = 1/6
1.104 When two dice are thrown, find the probability of getting the sums of 10 or 11.
Solution Let A be the event of getting a sum of 10 and B be the event of getting a sum of 11.
The total number of outcomes is 6^2 = 36.
The possible outcomes for A are {(5, 5), (4, 6), (6, 4)}.
Therefore, P(A) = 3/36 = 0.0833
The possible outcomes for B are {(6, 5), (5, 6)}.
P(B) = 2/36 = 0.0556
Since A and B are mutually exclusive,
P(sum of 10 or 11) = P(A) + P(B) = 3/36 + 2/36 = 5/36 = 0.1389
1.105 A shipment of components consists of three identical boxes. One box contains 2000 components
of which 25% are defective, the second box has 5000 components of which 20% are defective and the third
box contains 2000 components of which 600 are defective. A box is selected at random and a component
is removed at random from a box. What is the probability that this component is defective? What is the
probability that it came from the second box?
Solution Let B1, B2 and B3 be the events of selecting the respective boxes. The probability of selecting
a box is
P(B1) = P(B2) = P(B3) = 1/3
The proportions of defectives in the boxes are
P(D | B1) = 0.25, P(D | B2) = 0.20, P(D | B3) = 600/2000 = 0.30
By the total probability theorem,
P(D) = (1/3)(0.25 + 0.20 + 0.30) = 0.25
By Bayes' theorem, the probability that the defective component came from the second box is
P(B2 | D) = (1/3)(0.20)/0.25 = 4/15 ≈ 0.267
1.106 Three boxes of identical appearance contain two coins each. In one box both are gold, in the
second both are silver, and in the third box one is silver and the other is a gold coin. Suppose that a box is
selected at random and further that a coin in that box is selected at random. If this coin proves to be gold,
what is the probability that the other coin is also gold?
Solution Let B1 be the box with two gold coins, B2 the box with two silver coins, and B3 the mixed box.
The probability that the first coin drawn is gold is
P(G) = 1 × (1/3) + 0 × (1/3) + (1/2)(1/3) = 1/2
The probability that the box chosen contains two gold coins and a gold coin is drawn is
P(GG) = 1 × (1/3) = 1/3
Hence, given that the first coin is gold, the probability that the other coin is also gold is
P(GG)/P(G) = (1/3)/(1/2) = 2/3
1.107 In 20 items, 12 are defective and 8 are non-defective. If these items are chosen at random, what
is the probability that
(a) The first two items inspected are defective?
(b) The first two items inspected are non-defective?
(c) One is defective and the other is non-defective?
Solution
Number of defective items = 12
Number of non-defective items = 8
Total number of items = 20
Two items can be selected in C(20, 2) = 190 ways.
P(both inspected are defective) = C(12, 2)/C(20, 2) = 66/190 = 33/95
P(both inspected are non-defective) = C(8, 2)/C(20, 2) = 28/190 = 14/95
P(one is defective and the other is non-defective) = C(12, 1) C(8, 1)/C(20, 2) = 96/190 = 48/95
1.108 An urn contains four red, three green, five blue and three white balls. What is the probability of
selecting a sample size of eight balls containing two red, two green, three blue and one white ball?
Solution
Total number of balls = 4 + 3 + 5 + 3 = 15
The number of combinations for selecting 8 balls from 15 is C(15, 8).
The number of combinations for selecting two red balls from 4 is C(4, 2).
The number of combinations for selecting two green balls from three is C(3, 2).
The number of combinations for selecting three blue balls from five is C(5, 3).
The number of combinations for selecting one white ball from three is C(3, 1).
The required probability = C(4, 2) C(3, 2) C(5, 3) C(3, 1)/C(15, 8) = 6(3)(10)(3)/6435 = 0.0839
1.109 The probabilities of Ramesh scoring 90% of marks in English, Hindi and Sanskrit are 0.2, 0.3 and
0.5 respectively. If the grades can be regarded as independent events, find the probability that he gets 90%
(a) in all subjects, (b) in none of the subjects, and (c) in exactly two subjects.

Solution Let E1, E2 and E3 be the events of Ramesh scoring 90% in English, Hindi and Sanskrit
respectively.
P(E1) = 0.2, P(E2) = 0.3, P(E3) = 0.5
(a) P(scoring 90% in all subjects) = P(E1 ∩ E2 ∩ E3)
= P(E1)P(E2)P(E3) (∵ E1, E2 and E3 are independent)
= (0.2)(0.3)(0.5) = 0.03
(b) P(scoring 90% in none of the subjects)
= P(Ē1 ∩ Ē2 ∩ Ē3) = P(Ē1)P(Ē2)P(Ē3)
= (0.8)(0.7)(0.5) = 0.28
(c) P(scoring 90% in exactly two subjects)
= P(E1 ∩ E2 ∩ Ē3) + P(E1 ∩ Ē2 ∩ E3) + P(Ē1 ∩ E2 ∩ E3)
= (0.2)(0.3)(0.5) + (0.2)(0.7)(0.5) + (0.8)(0.3)(0.5)
= 0.03 + 0.07 + 0.12 = 0.22
1.110 A, B and C shoot to hit a target. If A hits the target 3 times in 5 trials, B hits it 2 times in 3 trials, and
C hits it 5 times in 8 trials, what is the probability that the target is hit by at least two persons?
Solution Let E1, E2 and E3 be the events that A, B and C hit the target respectively.
P(E1) = 3/5; P(E2) = 2/3; P(E3) = 5/8
The target is hit by at least two persons in the following ways:
(a) A and B hit the target and C does not
(b) A and C hit the target and B does not
(c) B and C hit the target and A does not
(d) A, B and C all hit the target
The required probability is
P = P(E1 ∩ E2 ∩ Ē3) + P(E1 ∩ Ē2 ∩ E3) + P(Ē1 ∩ E2 ∩ E3) + P(E1 ∩ E2 ∩ E3)
Since these compound events are mutually exclusive and the individual events are independent,
P = P(E1)P(E2)P(Ē3) + P(E1)P(Ē2)P(E3) + P(Ē1)P(E2)P(E3) + P(E1)P(E2)P(E3)
= (3/5)(2/3)(3/8) + (3/5)(1/3)(5/8) + (2/5)(2/3)(5/8) + (3/5)(2/3)(5/8)
= (18 + 15 + 20 + 30)/120 = 83/120 ≈ 0.6917
1.111 If one card is drawn at random from a pack of cards then show that getting an ace and getting a
heart are independent events.
Solution Let A be the event of getting an ace and B be the event of getting a heart.
P[A] = P[getting an ace] = C(4, 1)/C(52, 1) = 4/52 = 1/13
P[B] = P[getting a heart] = C(13, 1)/C(52, 1) = 13/52 = 1/4
P[A ∩ B] = P[getting the ace of hearts] = 1/52, which is equal to P(A) P(B) = (1/13)(1/4).
Hence getting an ace and getting a heart are independent events.
1.112 A technician has two electronic part cabinets with drawer arrangements as below:

3-drawer cabinet      4-drawer cabinet
pnp transistor        pnp transistor
npn transistor        pnp transistor
npn transistor        npn transistor
                      npn transistor

The technician selects one cabinet at random and withdraws a transistor from one of the drawers. Assume
that each cabinet, and each drawer within the selected cabinet, is equally likely to be selected.
(a) What is the probability that a pnp transistor is chosen?
(b) Given that an npn transistor is chosen, what is the probability that it came from the 3-drawer
cabinet?

Solution
(a) P(pnp) = (1/2)(1/3) + (1/2)(2/4) = 1/6 + 1/4 = 5/12
(b) P(npn) = 1 − 5/12 = 7/12, so by Bayes' theorem,
P(3-drawer | npn) = (1/2)(2/3)/(7/12) = (1/3)/(7/12) = 4/7
1.113 A problem is given to 3 students whose chances of solving it are 1/2, 1/3 and 1/4 respectively.
What is the probability that
(a) the problem will be solved?
(b) exactly two of them will solve the problem?

Solution Let S1, S2 and S3 be the events that the first, second and third student solves the problem. Then
P(S1) = 1/2, P(S2) = 1/3 and P(S3) = 1/4.
(a) The problem will be solved if at least one of them solves it. That is,
P(S1 ∪ S2 ∪ S3) = 1 − P(S̄1 ∩ S̄2 ∩ S̄3) = 1 − (1/2)(2/3)(3/4) = 1 − 1/4 = 3/4
(b) P(exactly two solve it) = P(S1)P(S2)P(S̄3) + P(S1)P(S̄2)P(S3) + P(S̄1)P(S2)P(S3)
= (1/2)(1/3)(3/4) + (1/2)(2/3)(1/4) + (1/2)(1/3)(1/4) = 1/8 + 1/12 + 1/24 = 1/4
1.114 India plays two matches each with the West Indies and Australia. In any match, the probabilities
of India getting 0, 1 and 2 points are 0.45, 0.05 and 0.5 respectively. Assuming that the outcomes are
independent, find the probability of India getting at least 7 points.
Solution India gets two points for a win, 1 point for a tie and 0 points for a loss. India gets at least seven
points by winning all four matches, or by winning three matches with one tie. Therefore,
P(India getting at least seven points) = P(winning all four matches) + P(winning three matches and tying one)
= P(W)P(W)P(W)P(W) + 4 × P(W)P(W)P(W)P(T)
= (0.5)^4 + 4(0.05)(0.5)^3
= (0.5)^3 [0.5 + 4 × 0.05]
= (0.5)^3 (0.7) = 0.0875
(the factor 4 counts the four positions at which the tie can occur)
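Since each match is independent with three possible point values, the answer can be confirmed by enumerating all four-match outcomes; the sketch below is ours, not the book's.

```python
# Enumeration check of Solved Problem 1.114: points per match are 0, 1 or 2
# with probabilities 0.45, 0.05 and 0.5.
from itertools import product

probs = {0: 0.45, 1: 0.05, 2: 0.5}
p = sum(
    probs[a] * probs[b] * probs[c] * probs[d]
    for a, b, c, d in product(probs, repeat=4)
    if a + b + c + d >= 7
)
print(p)   # 0.0875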
Practice Problem
1.29 An unbiased dice is tossed until a number greater than 4 appears. What is the probability that an even number of
tosses is needed for this event? (Ans. 2/5)
Solved Problems
1.115 A boy is throwing stones at a target. The probability of hitting the target at any trial is 1/2. What
is the probability of hitting the target the 5th time at the 10th throw?

Solution The probability of hitting the target in any throw is 1/2.
P(hitting the target the 5th time at the 10th throw)
= P(hitting the target 4 times in the first 9 throws) × P(hitting the target on the 10th throw)
= C(9, 4)(1/2)^4 (1/2)^5 × (1/2) = 126 (1/2)^10 = 63/512 ≈ 0.123
1.116 When we roll a pair of balanced dice, what are the probabilities of getting (a) 6, (b) 9, (c) 6 or 9,
(d) 4, (e) 2 or 12?
Solution The total number of outcomes is 36.
(a) A sum of 6 arises from (1, 5), (5, 1), (2, 4), (4, 2), (3, 3): P(6) = 5/36
(b) A sum of 9 arises from (3, 6), (6, 3), (4, 5), (5, 4): P(9) = 4/36 = 1/9
(c) Since the two events are mutually exclusive, P(6 or 9) = 5/36 + 4/36 = 9/36 = 1/4
(d) A sum of 4 arises from (1, 3), (3, 1), (2, 2): P(4) = 3/36 = 1/12
(e) A sum of 2 arises only from (1, 1) and a sum of 12 only from (6, 6):
P(2 or 12) = 1/36 + 1/36 = 1/18
1.117 Show that the chances of throwing a sum of six with 4, 3 or 2 dice are as 1 : 6 : 18.

Solution With four dice, the total number of possible outcomes is 6^4.
The outcomes giving a sum of six are {1113, 1131, 1311, 3111, 1122, 1212, 1221, 2112, 2121, 2211}:
10 outcomes.
So the probability P1 = 10/6^4.
With three dice, the total number of possible outcomes is 6^3.
The outcomes giving a sum of six are {114, 141, 411, 123, 132, 213, 231, 312, 321, 222}: 10 outcomes.
The probability P2 = 10/6^3.
With two dice, the total number of possible outcomes is 36, and the outcomes giving six are
{15, 51, 24, 42, 33}.
The probability P3 = 5/6^2.
P1 : P2 : P3 = 10/6^4 : 10/6^3 : 5/6^2
= 10 : 60 : 180
= 1 : 6 : 18
1.118 The probability that Ramu passes mathematics is 0.6 and physics is 0.4 and in both is 0.2. What
is the probability that he passes in (a) at least in one subject? (b) neither of the subjects? (c) one of the
subjects? (d) not in mathematics?
Solution Let A be the event that Ramu passes mathematics and B the event that Ramu passes physics:
P(A) = 0.6; P(B) = 0.4; P(A ∩ B) = 0.2
(a) P(passes at least one subject)
= P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
= 0.6 + 0.4 − 0.2 = 0.8
(b) P(passes neither subject) = 1 − P(A ∪ B) = 1 − 0.8 = 0.2
(c) P(passes exactly one subject) = P(A ∪ B) − P(A ∩ B) = 0.8 − 0.2 = 0.6
(d) P(does not pass mathematics) = 1 − P(A) = 1 − 0.6 = 0.4
1.119 A power plant will shut down if systems S1 and S2 or S1 and S3 fail simultaneously. The systems
S1, S2 and S3 are independent and their probabilities of failure are 0.02, 0.015, 0.025 respectively.
(a) What is the probability that the plant will shut down?
(b) What is the probability that the plant stays on line given that S1 failed.
Solution The plant shuts down if S1 and S2, or S1 and S3, fail simultaneously. The following table shows
the cases (F = fail, W = working) in which the plant shuts down:

S1   S2   S3   Plant
F    F    F    Shut down
F    F    W    Shut down
F    W    F    Shut down

(a) P(shut down) = P(S1 fails) × P(S2 or S3 fails)
= 0.02 × [0.015 + 0.025 − (0.015)(0.025)] = 0.02 × 0.039625 ≈ 0.000793
(b) Given that S1 has failed, the plant stays on line only if S2 and S3 both work:
P = (1 − 0.015)(1 − 0.025) = (0.985)(0.975) ≈ 0.960
1.120 Two switches S1 and S2 have respectively 95% and 90% chances
of working. Find the probability that in the circuit shown in Fig. 1.16
current will flow.
Fig. 1.16
Solution Let the events S1 and S2 correspond to the working of switches S1 and S2 respectively.
Given P(S1) = 0.95 and P(S2) = 0.9.
The current flows in the circuit if both switches work together. Therefore, the required probability is
P(S1 ∩ S2).
Since S1 and S2 are independent events,
P(S1 ∩ S2) = P(S1) P(S2) = (0.95)(0.9) = 0.855
1.121 An electric system has four switches arranged as shown in Fig. 1.17. The switches operate
independently of one another with a probability p. Find the probability that the current flows through the
circuit.
Fig. 1.17
Solution Let P(Si) = p be the probability that switch Si works, i = 1, 2, 3, 4.
The current flows through the circuit when both S1 and S2 are closed, or when both S3 and S4 are closed.
So the required probability is
P[(S1 ∩ S2) ∪ (S3 ∩ S4)] = P(S1 ∩ S2) + P(S3 ∩ S4) − P[(S1 ∩ S2) ∩ (S3 ∩ S4)]
Since the switches operate independently of one another,
P[(S1 ∩ S2) ∪ (S3 ∩ S4)] = P(S1)P(S2) + P(S3)P(S4) − P(S1)P(S2)P(S3)P(S4)
= p·p + p·p − p·p·p·p
= 2p^2 − p^4
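Reliability formulas like 2p^2 − p^4 can be verified by summing the probabilities of all switch configurations; the exhaustive check below is a sketch of ours, not the book's.

```python
# Exhaustive check of Solved Problem 1.121: four independent switches, each
# closed with probability p; current flows if (S1 and S2) or (S3 and S4).
from itertools import product

def flow_probability(p):
    total = 0.0
    for s in product([True, False], repeat=4):
        prob = 1.0
        for closed in s:
            prob *= p if closed else 1 - p
        if (s[0] and s[1]) or (s[2] and s[3]):
            total += prob
    return total

p = 0.3
print(flow_probability(p), 2 * p**2 - p**4)   # both 0.1719
```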
1.122 Consider that the probability of each relay being closed in the circuit shown in Fig. 1.18 is p. If all
relays function independently, what is the probability that current flows from A to B?
Fig. 1.18
Practice Problems
1.30 In Fig. 1.19 assume that the probability of each relay being closed is p. If all relays function independently, what
is the probability that current flows from A to B.
Fig. 1.19
(Ans. p + 3p2 – 4p3 – p4 + 3p5 – p6)
1.31 Each of two persons toss three fair coins. What is the probability they obtain the same number of heads?
(Ans. 5/16)
1.32 In successive rolls of a pair of fair dice, what is the probability of getting 2 sevens before 6 even numbers.
(Ans. 0.555)
P(A) = p; P(Ā) = 1 − p
Each particular sequence of n independent trials containing k successes has probability
p^k (1 − p)^(n−k)
1.123 A typist makes an error while typing a letter 2.5% of the time. What is the probability of exactly
one error in 10 letters?

Solution Let A be the event that an error occurs exactly once in 10 letters. With p = 0.025,
P(A) = C(10, 1)(0.025)^1 (1 − 0.025)^9 = 0.199
1.124 A test consists of 10 multiple-choice questions, with 4 choices. Among the choices, only one is
correct and only one can be chosen. A student selects the choices at random. What is the probability that
he has 1, 2, 3 correct answers?
Solution The probability of selecting the correct answer among the four choices is p = 1/4.
P(no correct answer) = C(10, 0)(1/4)^0 (1 − 1/4)^10 = 0.0563
P(1 correct answer) = C(10, 1)(1/4)^1 (1 − 1/4)^9 = 0.1877
P(2 correct answers) = C(10, 2)(1/4)^2 (1 − 1/4)^8 = 0.2816
P(3 correct answers) = C(10, 3)(1/4)^3 (1 − 1/4)^7 = 0.2503
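The binomial pmf used in the last two problems is a one-line function with the standard library; the sketch below (ours, not the book's) reproduces the four values above.

```python
# Binomial pmf check for Solved Problem 1.124: n = 10, p = 1/4.
from math import comb

def binom_pmf(n, k, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

for k in range(4):
    print(k, round(binom_pmf(10, k, 0.25), 4))
# 0 0.0563, 1 0.1877, 2 0.2816, 3 0.2503
```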
REVIEW QUESTIONS
1. Define the following terms:
(i) Sample space (ii) event (iii) trial (iv) sample point
2. State the axioms of probability.
3. Discuss the following in brief:
(i) Permutation (ii) combination (iii) probability
4. Give examples for finite sample space and infinite sample space.
5. Define: (i) Continuous sample space (ii) Discrete sample space.
6. Define and explain the following with an example:
(i) Equally likely events (ii) Exhaustive events (iii) Mutually exclusive events
7. When are two events said to be mutually exclusive? Explain with an example.
8. Distinguish between mutually exclusive events and independent events.
9. Give the classical definition of probability.
10. Give the axiomatic definition of probability.
11. If A and B are independent events, prove that the events Ā and B, A and B̄, and Ā and B̄ are also
independent.
12. State and prove the addition theorem of probability.
13. Discuss joint and conditional probability.
14. Define conditional probability and mention its properties.
15. Explain multiplication law of probability.
16. Explain about total probability.
17. State and prove Bayes' theorem of probability.
18. Define the term 'independent events'. State the conditions for independence of (i) any two events,
and (ii) any three events A, B and C.
19. Define Bernoulli trial.
EXERCISES
Problems
1. How many words can be formed using all letters of the word PERMUTATION using each letter
exactly once? (Ans. 39916800)
2. There are three copies each of 4 different books. In how many ways can they be arranged on a
shelf? (Ans. 12!/(3!)^4)
3. From 5 consonants, 4 vowels and 3 capital letters, how many words beginning with a capital
letter and containing 3 consonants and 2 vowels can be formed? (Ans. 816000)
4. How many number of diagonals can be drawn by joining the vertices of an octagon? (Ans. 20)
5. A group consists of 9 married couples. In how many ways can a mixed-doubles game be arranged if
no husband and wife play in the same game? (Ans. 1512)
6. Surya has 7 friends. In how many ways can he invite one or more of them to a party? (Ans. 127)
7. In how many ways can a committee of 5 members be selected from 6 men and 5 women consisting
of 3 men and 2 women? (Ans. 200)
8. In how many ways can we select a cricket team from 16 players in which 5 players can bowl? Each
cricket team must include 2 bowlers. (Ans. 550)
9. How many chords can be drawn through 21 points on a circle? (Ans. 210)
10. What is the probability that a leap year selected at random will contain 53 Saturdays? (Ans. 2/7)
11. An integer is chosen from 3 to 16. What is the probability that it is prime? (Ans. 5/14)
12. Two dice are thrown together. Find the probability that the total is 9. (Ans. 1/9)
13. A number is selected at random from 10 to 40. Find the probability that it is divisible by 5.
14. A bag contains 6 red , 5 white, 4 black balls. What is the probability that two balls drawn are red and
black? (Ans. 8/35)
15. Prove that P( A) = 1 - P( A) .
16. Two events A and B are such that P(A) = 1/5, P(A | B) = 1/3 and P(B | A) = 1/3. Find P(Ā | B̄).
17. If A and B are two independent events, show that P(A ∪ B) = 1 − P(Ā) P(B̄).
18. If P(A ∪ B) = 5/6, P(A ∩ B) = 1/3 and P(B̄) = 1/2, show that A and B are independent.
19. Given that P(A) = P(B) = P(C) = 1/3, P(A ∩ B) = P(C ∩ B) = 0 and P(A ∩ C) = 1/6, evaluate
P(A ∪ B ∪ C).
20. If the probability for A to fail in an examination is 0.2 and that of B is 0.4 then what is the probability
that either A or B fails? (Ans. 0.52)
21. From a pack of 52 cards, 1 card is drawn at random. Find the probability of getting a queen.
(Ans. 1/13)
22. If P(A) = 0.4, P(B) = 0.7 and P(A ∩ B) = 0.3, find P(Ā ∩ B̄) and P(A ∪ B̄).
23. A bag contains 12 red balls and 6 white balls. Two balls are drawn one by one without replacement.
What is the probability that both are white?
24. A box contains 100 tickets numbered 1, 2, …, 100. Two tickets are chosen at random. It is given
that the maximum number on the two chosen tickets is not more than 10. What is the probability that
the maximum number on them is 5? (Ans. 1/9)
25. Six boys and six girls sit in a row at random. Find the probability that
(i) the six girls sit together, and (ii) the boys and girls sit alternately. (Ans. (i) 1/132 (ii) 1/462)
26. One ticket is selected at random from 100 tickets numbered 00, 01, 02, …, 98, 99. If X and Y denote
the sum and the product of the digits respectively, find P(X = even, Y = odd). (Ans. 2/19)
27. There are 10 stations between A and B. A train is to stop at three of these 10 stations. What is the
probability that no two of these stations are consecutive? (Ans. 7/15)
28. Two fair dice are rolled. What is the conditional probability that the first die lands on 6, given that
the sum of the dice is 11?
1.76 Probability Theory and Random Processes
29. Five different objects are distributed among 3 persons at random. What is the probability that each
person receives at least one object? (Ans. 50/81)
30. There are 5 letters and 5 addressed envelopes. If the letters are put at random in the envelopes, what is
the probability that at least one letter may be placed in a wrongly addressed envelope?
(Ans. 119/120)
31. In a class, 60% are boys and the rest are girls. 50% of the boys and 35% of the girls know how to solve a
problem. A student is selected at random; given that the student can solve the problem, what
is the probability that the student is a girl? (Ans. 1/4)
32. Raju can hit a target in 2 out of 5 shots and Rithesh can hit the target in 3 out of 4 shots. What is the
probability that the target gets hits when both try? (Ans. 17/20)
33. Sita can solve a problem with probability of 3/4, and Gita can solve the problem with a probability
of 5/7. Find the probability that the problem will be solved by at least one of them. (Ans. 16/21)
34. Find the chances of throwing ten with four dice. (Ans. 5/81)
35. From a bag containing 4 white and 6 black balls, 2 balls are drawn at random. If the balls are drawn
one after other without replacement, find the probability that
(i) both balls are white
(ii) both balls are black (Ans. (i) 2/15, (ii) 1/3)
36. Two students attempt to write a program. Their chances of writing the program successfully are 1/8
and 1/12 and the chance of making a common error is 1/1001. Find the chance that the program is
correctly written. (Ans. 13/14)
37. The contents of urns I, II and III are as follows:
(i) 1 white, 2 black and 3 red balls
(ii) 2 white, 1 black and 1 red ball
(iii) 4 white, 5 black and 3 red balls
One urn is chosen at random and two balls are drawn. They happen to be white and red. What is the
probability that they came from urns I, II and III? (Ans. 0.2797, 0.466, 0.254)
38. A box contains 5 red and 4 white balls. Two balls are drawn successively from the box without
replacement and it is noted that the second one is white. What is the probability that the first is also
white? (Ans. 3/8)
39. An urn contains 10 white and 3 black balls. Another urn contains 3 white and 5 black balls. Two balls
are drawn at random from the first urn and placed in the second urn and then one ball is taken at
random from the latter. What is the probability that it is a white ball? (Ans. 59/130)
40. One integer is chosen at random from the numbers 1, 2, 3, …, 100. What is the probability that the
chosen number is divisible by
(i) 6 or 8 and
(ii) 6 or 8 or both? (Ans. (i) 1/5, (ii) 6/25)
41. Gita goes to office either by car, scooter or bus, with probabilities 1/7, 4/7 and 2/7 respectively. The
probabilities that she reaches office late if she takes a car, scooter or bus are 2/9, 4/9 and 1/3
respectively. Given that she reached office in time, what is the probability that she travelled by car?
42. The probability that a doctor A diagnoses a disease correctly is 0.6. The probability that a patient will
die under his treatment after a correct diagnosis is 0.4, and the probability of death after a wrong diagnosis
is 0.7. A patient of Doctor A who had the disease died. What is the probability that his disease was
diagnosed correctly? (Ans. 6/13)
43. In a village, a shop distributes only two newspapers A and B. 25% of the village population reads A
and 20% reads B, While 8% reads both A and B. It is known that 30% of those who read A and not
B look into advertisements and 40% of those who read B but not A look into advertisements while
50% of those who read both A and B look into advertisements. What percentage of the population
reads an advertisement? (Ans. 13.9%)
44. Two balls are drawn at random with replacement from a box containing 10 black and 8 red balls.
Find the probability that (i) both the balls are red, and (ii) the first balls is black and second is red.
(Ans. 16/81, 20/81)
45. The probability that Ravi passes physics is 2/3 and the probability that he passes both physics and
chemistry is 14/45. The probability that he passes at least one test is 4/5. What is the probability that
he passes chemistry? (Ans. 4/9)
46. In a single throw of two dice, what is the probability of obtaining a sum of at least 10? (Ans. 1/6)
47. A couple has two children. Find the probability that both are boys, if it is known that (i) one of the
children is a boy, and (ii) the older child is a boy. (Ans. 1/3; 1/2)
48. A fair coin and an unbiased dice are tossed. Let A be the event that a head appears on the coin and
B be the event of 3 on the dice. Check whether A and B are independent events or not.
(Ans. Independent events)
49. Raju speaks the truth in 60% of cases and Rajesh in 90% of cases. What is the percentage of cases
in which they are likely to contradict each other? (Ans. 42%)
50. A man takes a step forward with probability of 0.4 and backwards with probability of 0.6. Find the
probability that at the end of eleven steps, he is one step away from the starting point. (Ans. 0.368)
Multiple-Choice Questions
1. P(A ∩ B) + P(A ∩ B̄) =
(a) P(A) (b) P(A ∪ B) (c) P(B) (d) P(Ā ∪ B̄)
2. If A and B are any two events, the probability that exactly one of them occurs is
(a) P(A) + P(B) − 2P(A ∩ B) (b) P(A) + P(B) − P(A ∪ B)
(c) P(A ∩ B̄) + P(Ā ∩ B) (d) P(Ā) + P(B̄)
3. If two events A and B are such that P(Ā) = 0.3, P(B) = 0.4 and P(A ∩ B̄) = 0.5, then P(B | A ∪ B̄) =
(a) 1/3 (b) 1/4 (c) 1/6 (d) 1/8
4. In two events, P(A ∪ B) = P(A) = (6/5)P(B) and P(B) = 2/3; then A and B are
(a) independent (b) mutually exhaustive
(c) mutually exclusive (d) dependent
5. Let A and B be two events such that P(Ā ∩ B̄) = 1/6, P(A ∩ B) = 1/4 and P(Ā) = 1/4, where Ā stands
for the complement of the event A. The events A and B are
(a) mutually exclusive and independent (b) independent but not equally likely
(c) equally likely (d) mutually exclusive
23. A man rolls a dice until he gets an even number. The probability that he gets 4 on the last throw is
(a) 1/3 (b) 1/4 (c) 1/5 (d) 1/7
24. A couple has two children. The probability that both are girls, if the eldest is a girl, is
(a) 1/4 (b) 1/3 (c) 2/3 (d) 1/2
25. Two fair dice are rolled. The conditional probability that at least one lands on 6, given that the dice
land on different numbers, is
(a) 1/2 (b) 1/4 (c) 1/3 (d) 1/5
26. A dice will be rolled 5 times. The probability that "3" will show up exactly twice is
(a) 0.16 (b) 0.2 (c) 0.32 (d) 0.52
27. A typist makes an error while typing a letter 0.3% of the time. The probability of no error in 10
letters is
(a) 0.82 (b) 0.97 (c) 0.58 (d) 0.65
28. A coin is tossed three times. The probability that a head shows up at least once is
(a) 1/8 (b) 1/2 (c) 7/8 (d) 5/8
29. A dice is rolled 4 times. The probability of 6 coming up at least once is
(a) 0.52 (b) 0.71 (c) 0.42 (d) 0.3
30. A dice is rolled twice and the sum of the numbers appearing on them is observed to be 7. The
conditional probability that the number 2 has appeared at least once is
(a) 1/4 (b) 2/3 (c) 1/3 (d) 1/6
2.1 INTRODUCTION
In the previous chapter, we studied the concept of probability space that completely describes the outcome
of a random experiment. We also observed that in many of the examples of random experiments, the result
was not a numerical quantity, but in descriptive form. For example, in the experiment of tossing a coin, the
outcome is head or tail. Similarly, in classifying a manufactured item, we used categories like 'defective' and
'non-defective'. In some experiments, the description of the outcomes is sufficient. However, in many cases, it is
required to record the outcome of the experiment as a number. Even in experiments where the outcome
is descriptive, we can assign a number to each outcome. For example, in the experiment of
tossing a coin, we can assign zero to tail and one to head. The rule that associates such a real number with
each outcome of an experiment is known as a random variable. In this chapter, we will study the concept
of a random variable.
The set of all possible values of X is known as the range space and is denoted by RX. That is, the random
variable X maps the entire sample space S to another space RX.
Example 1 In a coin-tossing experiment, the sample space consists of heads and tails. Now consider a random
variable X(s) that maps the sample space to real values as follows:
X(heads) = 1; X(tails) = 0
(Figure: the sample space {heads, tails} mapped onto the points 1 and 0 of the real line; the set of values is the range RX.)

Fig. 2.3 Sample space of the two-coin experiment (outcomes TT, TH, HT, HH) mapping to real values
Example 3 Consider an experiment of testing the lifespan of a bulb. Let the lifespan be given in hours
denoted by X. Since the manufacturer cannot tell with certainty about the lifespan of a particular bulb, X takes
a value which is random. Note that X is a non-negative random variable that takes a value x ≥ 0.
After considering all the above examples, we can define the random variable as follows.
Definition A random variable is a function that assigns a real number for all the outcomes in the sample
space of a random experiment.
The random variable is denoted by uppercase letter X and the measurement value of the random variable
is denoted by a lowercase letter x.
REVIEW QUESTIONS
1. Define a random variable.
2. Explain the concept of a random variable with an example.
X     0    1    2
P(X)  1/4  1/2  1/4
Once the probabilities associated with various outcomes in the range space RX have been determined, we
shall often ignore the original space S.
Solved Problems
2.1 The sample space for an experiment is S = {0, 2, 4, 6}. List all the possible values of the following
random variables:
(a) X = 2s (b) X = 2s^2 − 1 (c) X = 1/(2s + 1)

Solution Given: S = {0, 2, 4, 6}
(a) X = 2s: X = {0, 4, 8, 12}
(b) X = 2s^2 − 1: X = {−1, 7, 31, 71}
(c) X = 1/(2s + 1): X = {1, 1/5, 1/9, 1/13}
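Since a random variable is just a function on the sample space, such mappings translate directly into code; the sketch below is ours, not the book's.

```python
# Solved Problem 2.1 as Python mappings over the sample space.
S = [0, 2, 4, 6]
print([2 * s for s in S])            # X = 2s        -> [0, 4, 8, 12]
print([2 * s**2 - 1 for s in S])     # X = 2s^2 - 1  -> [-1, 7, 31, 71]
print([1 / (2 * s + 1) for s in S])  # X = 1/(2s+1)  -> [1.0, 0.2, 0.111..., 0.0769...]
```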
2.2 In an experiment of rolling a dice and flipping a coin, the random variable X is chosen such that
(a) A coin heads (H) outcome corresponds to positive values of X that are equal to the number that
shows up on the dice, and
(b) A coin tails (T) outcome corresponds to negative values of X that are equal in magnitude to twice
the number shown on the dice. Map the elements of random variable X into points on the real line
and explain.
Solution In an experiment of rolling a dice and flipping a coin, the sample space is
S = {(H,1), (H, 2), (H, 3), (H, 4), (H, 5), (H, 6), (T, 1), (T, 2), (T, 3), (T, 4), (T, 5), (T, 6)}
This sample space is mapped onto the real axis using a random variable X such that when the coin shows
heads, X takes the positive value that shows up on the dice. Therefore,
X(H, 1) = 1; X(H, 2) = 2; X(H, 3) = 3; X(H, 4) = 4; X(H, 5) = 5; X(H, 6) = 6
Similarly, when the coin shows tails, the random variable takes a value equal to the negative of twice the
number that shows up on the dice. Therefore,
X(T, 1) = −2; X(T, 2) = −4; X(T, 3) = −6; X(T, 4) = −8; X(T, 5) = −10; X(T, 6) = −12
The mapping is shown in Fig. 2.4.
2.3 In an experiment, the pointer on a wheel of chance is spun. The possible outcomes are the numbers
from 0 to 12 marked on the wheel. The sample space consists of the numbers in the set {0 < s ≤ 12}, and
the random variable is defined as X = X(s) = s^2. Map the elements of the random variable on the real line
and explain.
Solution The random variable X(s) = s^2 maps each outcome s in (0, 12] to the value s^2 in (0, 144]; for
example, the outcomes 5, 7 and 12 map to 25, 49 and 144 respectively.

Fig. 2.5 Mapping of the sample space of the experiment given in Solved Problem 2.3
Example Consider rolling a dice. The sample space is S = {1, 2, 3, 4, 5, 6}. Let Y denote the random
variable that indicates whether the number thrown is even. Then the probability that the number is even is
P(Y = even) = 3/6 = 1/2
2.2.3 Conditions for a Function to be a Random Variable
The random variable X is a function that maps the sample points in the sample space to real values on the real
axis. For a function to be a random variable, it has to satisfy two conditions:
(i) The set {X ≤ x} shall be an event for any real number x. The probability of this event, P(X ≤ x), is equal
to the sum of the probabilities of all the elementary events corresponding to {X ≤ x}.
(ii) The probabilities of the events {X = ∞} and {X = −∞} must be zero: P(X = −∞) = 0; P(X = ∞) = 0.
REVIEW QUESTION
3. What are the conditions for a function to be a random variable?
Practice Problems
2.1 The sample space for an experiment is S = {0, 1, 2, 3}. List all possible values of the following random variables:
(a) X = 2s (b) X = s^2 − 1 (c) X = 1/(1 + s)
(Ans. (a) X = {0, 2, 4, 6} (b) X = {−1, 0, 3, 8} (c) X = {1, 0.5, 0.33, 0.25})
2.2 For the following experiments, determine the possible values of the random variable
(a) A weighing scale whose display shows only five digits. The random variable is the displayed weight.
(b) A C program with 1000 lines. The random variable is the number of lines with errors.
Solved Problems
2.4 Given that a random variable X has the following values, state if X is discrete, continuous or mixed:
(a) {–5 < x < 10} (b) {5, 6, 10 < x < 15, 19, 20} (c) {2, 4, 6, 8, 9, 10}.
Solution
(a) X = {–5 < x < 10} continuous random variable
(b) X = {5, 6, 10 < x < 15, 19, 20} mixed random variable
(c) X = {2, 4, 6, 8, 9, 10} discrete random variable.
Let X be a random variable and x a number. Consider the event that X takes a value less than or equal to x.
The Cumulative Distribution Function (CDF) of X is then defined by
FX(x) = P(X ≤ x), −∞ < x < ∞ (2.1)
That is, FX(x) denotes the probability that the random variable X takes on a value that is less than or equal
to x.
It can be shown that the discrete random variable is one having a staircase-type CDF and a continuous
random variable is one having an absolutely continuous CDF.
The CDF of discrete random variable and continuous random variable are shown in Fig. 2.6 and Fig. 2.7
respectively.
The Random Variable 2.7
F x( X )
F x( X )
x x
Fig. 2.6 CDF of discrete random variable Fig. 2.7 CDF of continuous random variable
From the table, FX(2) = 1/4 + 1/8 = 3/8 and FX(3) = 1/4 + 1/8 + 1/2 = 7/8
∴ FX(x1) < FX(x2) (2.4)
Now consider another probability distribution function, shown in Table 2.3.

Table 2.3
X     1    2    3    4
P(X)  1/2  1/4  0    1/4

FX(2) = 1/2 + 1/4 = 3/4 and FX(3) = 1/2 + 1/4 + 0 = 3/4
FX(2) = FX(3)
Therefore, FX(x1) ≤ FX(x2)
4. P(x1 < X ≤ x2) = FX(x2) − FX(x1) (2.5)
or FX(x) = Σ_{k ≤ x} PX(k) (2.10)
Solved Problems
2.5 The CDF of a random variable X is given by
FX(x) = 0 for x < 0
= x + 1/3 for 0 ≤ x ≤ 2/3
= 1 for x > 2/3
(a) Draw the graph of the CDF.
(b) Compute P(X > 1/4).

Solution
Given: FX(x) = 0 for x < 0, FX(x) = x + 1/3 for 0 ≤ x ≤ 2/3, and FX(x) = 1 for x > 2/3.
The CDF is shown in Fig. 2.8.

Fig. 2.8 The CDF of the random variable X given in Solved Problem 2.5

The probability that X > 1/4 is given by
P[X > 1/4] = 1 − P[X ≤ 1/4]
= 1 − (1/4 + 1/3) = 5/12
2.6 Verify that
FX(x) = 1 − e^(−x/2) for x ≥ 0, and 0 otherwise,
is a valid distribution function.

Solution
A distribution function satisfies the following properties:
(a) FX(−∞) = 0 (b) FX(∞) = 1 (c) 0 ≤ FX(x) ≤ 1
The given function is zero for x < 0, so it satisfies FX(−∞) = 0.
Also, it satisfies FX(∞) = 1 − e^(−∞) = 1, and 0 ≤ FX(x) ≤ 1 for all x.
Therefore, the given function is a valid distribution function.
2.7 In the experiment of tossing a fair coin three times, the sample space consists of eight equally likely
sample points. S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}. If X is the random variable giving
the number of tails obtained, find (a) P(X = 2) (b) P(X < 2) (c) P(0 < X < 3).
Solution
(a) The random variable X is associated with the number of tails showing. For X = 2, the outcomes are
{HTT, THT, TTH}, and the probability is P(X = 2) = 1/8 + 1/8 + 1/8 = 3/8.
(b) Similarly, for X < 2, the outcomes are {HHH, HHT, HTH, THH}, and the probability is
P(X < 2) = 4/8 = 1/2.
(c) For 0 < X < 3, the outcomes are {HHT, HTH, HTT, THH, THT, TTH}, and the probability is
P(0 < X < 3) = 6/8 = 3/4.
2.8 Consider the experiment of throwing two fair dice. Let X be the random variable indicating the sum
of the numbers that appear. (a) Find the sample space. (b) Find P(X = 4). (c) Find P(2 < X ≤ 6).
(d) Find P(X ≤ 5).

Solution
(a) Since the experiment is throwing two fair dice, the sample space is
S = {(1, 1), (1, 2), …, (1, 6), (2, 1), …, (2, 6), …, (6, 1), …, (6, 6)}
with 36 equally likely outcomes, and the random variable X can take any value from {2, 3, 4, …, 12}.
(b) The value X = 4 can occur in three ways: (1, 3), (2, 2) and (3, 1).
Therefore, P(X = 4) = 3/36 = 1/12
(c) From the sample space, we find that X takes the values 3, 4, 5 and 6 in 2 + 3 + 4 + 5 = 14 ways.
Therefore, P(2 < X ≤ 6) = 14/36 = 7/18
(d) X ≤ 5 occurs in 1 + 2 + 3 + 4 = 10 ways. Therefore, P(X ≤ 5) = 10/36 = 5/18
2.9 Consider the experiment of tossing four fair coins. The random variable X is associated with the
number of tails showing. Compute and sketch the CDF of X.
Solution Each of the 2^4 = 16 outcomes is equally likely, and X (the number of tails) takes the values
0, 1, 2, 3, 4 with probabilities 1/16, 4/16, 6/16, 4/16 and 1/16 respectively. The CDF is therefore a
staircase with
FX(0) = 1/16, FX(1) = 5/16, FX(2) = 11/16, FX(3) = 15/16, FX(4) = 1

Fig. 2.9 CDF of the number of tails in four coin tosses
2.10 The random variable X takes the discrete values in the set {−1, −0.5, 0.7, 1.5, 3}. The corresponding
probabilities are {0.1, 0.2, 0.1, 0.4, 0.2}. Plot its distribution function and state whether it is a discrete or
continuous distribution function.
Solution The discrete variable X and the corresponding probabilities P(X) are given in the table.
X –1 –0.5 0.7 1.5 3
P(X) 0.1 0.2 0.1 0.4 0.2
The distribution function is given by
FX(x) = P(X ≤ x); FX(−1) = P(X ≤ −1) = 0.1
FX(−0.5) = P(X ≤ −0.5) = P(X = −1) + P(X = −0.5) = 0.1 + 0.2 = 0.3
FX(0.7) = P(X ≤ 0.7) = P(X = −1) + P(X = −0.5) + P(X = 0.7) = 0.1 + 0.2 + 0.1 = 0.4
FX(1.5) = P(X ≤ 1.5) = 0.1 + 0.2 + 0.1 + 0.4 = 0.8
FX(3) = P(X ≤ 3) = 0.1 + 0.2 + 0.1 + 0.4 + 0.2 = 1
Since the distribution function is a staircase function, it is a discrete distribution function.

Fig. 2.10 Distribution function of the random variable X in Solved Problem 2.10
2.12 A fair coin is tossed three times and the faces showing up are observed. (a) Write the sample
description space. (b) If X is the number of heads in each of the outcomes of this experiment, find the
probability function. (c) Sketch the CDF and pmf.
Solution The sample space S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
The probability of each outcome is 1/8.
If xi is the number of heads in each outcome, the probability corresponding to each value is shown in the
table:

xi      0    1    2    3
P(xi)   1/8  3/8  3/8  1/8

FX(0) = P{X ≤ 0} = 1/8; FX(1) = P{X ≤ 1} = 4/8
FX(2) = P{X ≤ 2} = 7/8; FX(3) = P{X ≤ 3} = 1
The plots of the CDF and pmf are shown in Fig. 2.11.
Fig. 2.11 (a) CDF (b) pmf
2.13 A discrete random variable X takes the values 0, 1, 2 and 3 with probabilities 1/6, 1/3, 3/10 and 1/5
respectively. Find and sketch the CDF FX(x).

Solution
We have FX(x) = Σ_{k ≤ x} PX(k)
FX(0) = PX(0) = 1/6
FX(1) = PX(0) + PX(1) = 1/6 + 1/3 = 1/2
FX(2) = PX(0) + PX(1) + PX(2) = 1/6 + 1/3 + 3/10 = 0.8
FX(3) = 1/6 + 1/3 + 3/10 + 1/5 = 1
The CDF is shown in Fig. 2.12.

Fig. 2.12 CDF of the random variable X in Solved Problem 2.13
2.14 The pmf of the number of components N of a system that fails is defined by
PN(n) = C(3, n)(0.3)^n (0.7)^(3−n) for n = 0, 1, 2, 3, and 0 otherwise.
(a) What is the CDF of N?
(b) What is the probability that fewer than two components of the system fail?

Solution
(a) PN(0) = C(3, 0)(0.3)^0 (0.7)^3 = (0.7)^3 = 0.343
PN(1) = C(3, 1)(0.3)^1 (0.7)^2 = 3(0.3)(0.7)^2 = 0.441
PN(2) = C(3, 2)(0.3)^2 (0.7)^1 = 3(0.3)^2 (0.7) = 0.189
PN(3) = C(3, 3)(0.3)^3 (0.7)^0 = (0.3)^3 = 0.027
FN(0) = PN(0) = 0.343
FN(1) = PN(0) + PN(1) = 0.784
FN(2) = PN(0) + PN(1) + PN(2) = 0.973
FN(3) = PN(0) + PN(1) + PN(2) + PN(3) = 1
(b) The probability that fewer than two components of the system fail is
PN(k < 2) = PN(k = 0) + PN(k = 1)
= FN(1) = 0.784
Practice Problems
2.6 The pmf of a random variable X is
P(X = x) = x/15 for x = 1, 2, 3, 4, 5, and 0 otherwise.
Find (i) P(X = 1 or 2), (ii) P(2 ≤ X ≤ 4). (Ans. (i) 1/5 (ii) 3/5)
2.7 A discrete random variable X has the probability distribution given below:

Values  0   1   2    3    4    5     6      7
P(x)    0   a   2a   2a   3a   a^2   2a^2   7a^2 + a

Find (i) a, (ii) P(X < 6). (Ans. (i) 1/10 (ii) 81/100)
The area under the density function is unity. This property along with the property 1 can be used to
check whether the given function is a valid density function or not.
3. FX(x) = ∫_{−∞}^{x} fX(u) du (2.17)
The above equation states that the distribution function FX(x) is equal to the integral of the density
function up to the value x.
Fig. 2.14 The shaded area under fX(x) between a and b is equal to FX(b) − FX(a) = P(a < X ≤ b)
For a discrete random variable, the density function can be obtained by differentiating Eq. (2.9). The
differentiation of a unit step is an impulse. Therefore, for a discrete random variable,
fX(x) = Σ_{i=1}^{N} P(xi) δ(x − xi) (2.19)
where δ(x) is the impulse function, defined as
δ(x) = 1 for x = 0, and 0 for x ≠ 0 (2.20)
The impulse function shifted by a is defined as
δ(x − a) = 1 for x = a, and 0 for x ≠ a (2.21)
Thus, the density function for a discrete random variable is a sum of shifted and scaled impulses.
Solved Problems
2.15 Verify whether the following functions are valid probability density functions:
(a) fX(x) = e^(−x) for x ≥ 0, and 0 otherwise
(b) fX(x) = x e^(−x^2) for x ≥ 0, and 0 otherwise
(c) fX(x) = (1/4)(x^2 − 1) for |x| < 2, and 0 otherwise
(d) fX(x) = 1/5 for 0 ≤ x ≤ 5, and 0 otherwise

Solution To verify the validity of a pdf, we need to check that the function is non-negative and that the
area under the curve is equal to unity.
(a) ∫_{−∞}^{∞} fX(x) dx = ∫_{0}^{∞} e^(−x) dx = [−e^(−x)]_{0}^{∞} = 1, and fX(x) ≥ 0. Hence it is a valid pdf.
(b) With the substitution t = x^2, dt = 2x dx,
∫_{0}^{∞} x e^(−x^2) dx = (1/2) ∫_{0}^{∞} e^(−t) dt = 1/2
Since ∫_{−∞}^{∞} fX(x) dx ≠ 1, it is not a valid pdf.
(c) The function takes the negative value −1/4 at x = 0. Hence it is not a valid pdf.
(d) ∫_{−∞}^{∞} fX(x) dx = ∫_{0}^{5} (1/5) dx = (1/5)(5 − 0) = 1, and fX(x) is non-negative. Hence it is a
valid pdf.
2.16 Show that
fX(x) = x for 0 ≤ x ≤ 1, = 2 − x for 1 < x ≤ 2, and 0 otherwise,
is a valid density function.

Solution The function is non-negative, and
∫_{−∞}^{∞} fX(x) dx = ∫_{0}^{1} x dx + ∫_{1}^{2} (2 − x) dx = [x^2/2]_{0}^{1} + [2x − x^2/2]_{1}^{2}
= 1/2 + (4 − 2) − (2 − 1/2) = 1
Hence, fX(x) is a valid density function.
2.18 In a cost price shop, the amount of rice (in hundreds of kilos) that sells in a day is a random variable
with density
fX(x) = Cx for 0 ≤ x < 3
= C(5 − x) for 3 ≤ x ≤ 5
= 0 otherwise
(a) For what value of C is fX(x) a valid pdf?
(b) What is the probability that the number of kilos of rice that will be sold in a day is (i) more than
200 kilos, (ii) between 150 and 400 kilos?

Solution
(a) We require ∫_{−∞}^{∞} fX(x) dx = 1:
∫_{0}^{3} Cx dx + ∫_{3}^{5} C(5 − x) dx = C[x^2/2]_{0}^{3} + C[5x − x^2/2]_{3}^{5}
= C(9/2) + C[(25 − 25/2) − (15 − 9/2)] = 9C/2 + 2C = 1
⇒ C = 2/13
(b) (i) P(X > 2) = ∫_{2}^{3} (2/13)x dx + ∫_{3}^{5} (2/13)(5 − x) dx
= (2/13)[x^2/2]_{2}^{3} + (2/13)[5x − x^2/2]_{3}^{5}
= (2/13)(5/2) + (2/13)(2) = 5/13 + 4/13 = 9/13
Since x is in hundreds of kilos, this is the probability that more than 200 kilos are sold.
(ii) P(1.5 < X < 4) = ∫_{1.5}^{3} (2/13)x dx + ∫_{3}^{4} (2/13)(5 − x) dx
= (2/13)[x^2/2]_{1.5}^{3} + (2/13)[5x − x^2/2]_{3}^{4}
= (2/13)(4.5 − 1.125) + (2/13)(12 − 10.5)
= (2/13)(3.375) + (2/13)(1.5) = 0.75
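Piecewise integrals like these are convenient to double-check numerically. The sketch below is ours, not the book's, and assumes SciPy is available (the book itself uses no software).

```python
# Numerical check of Solved Problem 2.18 with scipy.
from scipy.integrate import quad

C = 2 / 13
f = lambda x: C * x if x < 3 else C * (5 - x)
print(quad(f, 0, 5, points=[3])[0])     # ≈ 1.0, so the pdf is normalized
print(quad(f, 2, 5, points=[3])[0])     # ≈ 9/13 ≈ 0.6923
print(quad(f, 1.5, 4, points=[3])[0])   # ≈ 0.75
```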
2.19 Consider the probability density function fX(x) = ae–b|x| where x is a random variable whose
allowable values range from x = –• to •. Find (a) the CDF of FX(x), (b) the relationship between a and b,
(c) the probability that the outcome x lies between 1 and 2.
Solution
Given: fX(x) = a e–b|x|
That is fX(x) = aebx for x < 0
= ae–bx for x ≥ 0
(a) We know that the CDF is

F_X(x) = ∫_{−∞}^{x} f_X(u) du

For x < 0:

F_X(x) = ∫_{−∞}^{x} a e^{bu} du = (a/b) e^{bu} |_{−∞}^{x} = (a/b) e^{bx}

For x ≥ 0:

F_X(x) = ∫_{−∞}^{0} a e^{bu} du + ∫_{0}^{x} a e^{−bu} du

    = a/b − (a/b)(e^{−bx} − 1) = a/b + a/b − (a/b) e^{−bx} = (a/b)(2 − e^{−bx})

Therefore,

F_X(x) = (a/b) e^{bx} for x < 0, and (a/b)(2 − e^{−bx}) for x ≥ 0
(b) We know ∫_{−∞}^{∞} f_X(x) dx = 1:

∫_{−∞}^{∞} f_X(x) dx = ∫_{−∞}^{0} a e^{bx} dx + ∫_{0}^{∞} a e^{−bx} dx

    = (a/b) e^{bx} |_{−∞}^{0} + (−a/b) e^{−bx} |_{0}^{∞} = a/b + a/b = 2a/b

That is, 2a/b = 1, so a/b = 1/2, or b = 2a.

The relationship between a and b is b = 2a.

(c) P(1 ≤ X ≤ 2) = ∫_{1}^{2} f_X(x) dx = ∫_{1}^{2} a e^{−bx} dx = (a/(−b)) e^{−bx} |_{1}^{2}

    = −(a/b)[e^{−2b} − e^{−b}] = (a/b)[e^{−b} − e^{−2b}] = (1/2)[e^{−b} − e^{−2b}]
2.20 A random variable X has the density function f_X(x) = A(3x − x²) for 0 < x < 2, and 0 otherwise. (a) Find A. (b) Find P(X > 1).

Solution

(a) ∫_{−∞}^{∞} f_X(x) dx = ∫_{0}^{2} A(3x − x²) dx = A[3x²/2 − x³/3]_{0}^{2} = A[6 − 8/3] = A(10/3)

That is, 10A/3 = 1, from which A = 3/10. Hence

f_X(x) = (3/10)(3x − x²) for 0 < x < 2, and 0 otherwise
(b) P(X > 1) = ∫_{1}^{∞} f_X(x) dx = (3/10) ∫_{1}^{2} (3x − x²) dx

    = (3/10)[3x²/2 − x³/3]_{1}^{2} = (3/10)[(3/2)(4 − 1) − (1/3)(8 − 1)]

    = (3/10)[9/2 − 7/3] = 13/20
2.21 Find k so that f_X(x) = k x²(1 − x³) for 0 < x < 1, and 0 otherwise, is a proper density function.

Solution

∫_{−∞}^{∞} f_X(x) dx = ∫_{0}^{1} k x²(1 − x³) dx = k[x³/3 − x⁶/6]_{0}^{1} = k[1/3 − 1/6] = k/6

That is, k/6 = 1, so k = 6. For k = 6, f_X(x) is a proper density function.
2.22 A random variable X has the density function f_X(x) = A for 1 ≤ x ≤ 4, and 0 otherwise. Find A, P(X > 2) and P(1 ≤ X ≤ 3).

Solution

We know ∫_{−∞}^{∞} f_X(x) dx = 1, so ∫_{1}^{4} A dx = 1, from which 3A = 1; that is, A = 1/3.

P(X > 2) = ∫_{2}^{4} f_X(x) dx = (1/3)(4 − 2) = 2/3

P(1 ≤ X ≤ 3) = ∫_{1}^{3} A dx = (1/3)(3 − 1) = 2/3
2.23 For the following pdfs, find the constant k so that f_X(x) satisfies the conditions of being a pdf of a random variable X. Also calculate P(−1 < X < 1).

(a) f_X(x) = k(1 − x²) for 0 < x < 1, and 0 otherwise
(b) f_X(x) = k x² e^{−2x} for 0 < x < ∞, and 0 otherwise
(c) f_X(x) = k(x + 1) for −1 < x < 3, and 0 otherwise

Solution

(a) Given f_X(x) = k(1 − x²) for 0 < x < 1, and 0 otherwise.
We know ∫_{−∞}^{∞} f_X(x) dx = 1, so ∫_{0}^{1} k(1 − x²) dx = 1:

k[x − x³/3]_{0}^{1} = k[1 − 1/3] = 1  ⟹  k = 3/2

(b) Note that lim_{x→∞} x² e^{−2x} = lim_{x→∞} x²/e^{2x} = 0 by L'Hospital's rule, and similarly lim_{x→∞} x e^{−2x} = 0.
∫_{−∞}^{∞} f_X(x) dx = 1  ⟹  ∫_{0}^{∞} k x² e^{−2x} dx = 1

∫_{0}^{∞} k x² e^{−2x} dx = k[−x² e^{−2x}/2 − x e^{−2x}/2 − e^{−2x}/4]_{0}^{∞} = k/4 = 1

⟹  k = 4
P(−1 < X < 1) = ∫_{−1}^{1} f_X(x) dx = ∫_{0}^{1} 4x² e^{−2x} dx

    = 4[−x² e^{−2x}/2 − x e^{−2x}/2 − e^{−2x}/4]_{0}^{1}

    = 4[(−e^{−2}/2 − e^{−2}/2 − e^{−2}/4) + 1/4] = 1 − 5e^{−2} = 0.3233
(c) ∫_{−1}^{3} k(x + 1) dx = k[x²/2 + x]_{−1}^{3} = k[(9/2 + 3) − (1/2 − 1)] = 8k

8k = 1  ⟹  k = 1/8
2.24 Verify that f_X(x) = (3/64) x² for 0 ≤ x ≤ 4, and 0 otherwise, is a legitimate pdf.

Solution

∫_{−∞}^{∞} f_X(x) dx = ∫_{0}^{4} (3/64) x² dx = (3/64)[x³/3]_{0}^{4} = (3/64)(64/3) = 1

Therefore, f_X(x) is a legitimate pdf.
2.25 The density function f_X(x) is defined as

f_X(x) = k/2 for 0 < x ≤ 2, and 1/4 for 2 < x ≤ 3

Find k and then the distribution function F_X(x).

Solution

Given: f_X(x) = k/2 for 0 < x ≤ 2, and 1/4 for 2 < x ≤ 3.
We know ∫_{−∞}^{∞} f_X(x) dx = 1:

∫_{0}^{2} (k/2) dx + ∫_{2}^{3} (1/4) dx = (k/2)(2 − 0) + (1/4)(3 − 2) = k + 1/4

⟹  k + 1/4 = 1, so k = 3/4. Hence

f_X(x) = 3/8 for 0 < x ≤ 2, and 1/4 for 2 < x ≤ 3

Fig. 2.16 The pdf f_X(x): height 3/8 on (0, 2] and 1/4 on (2, 3]
The pdf is shown in Fig. 2.16. From the pdf, we find F_X(x) over four ranges of x:

(a) x < 0  (b) 0 < x ≤ 2  (c) 2 < x ≤ 3  (d) x > 3

(a) Since f_X(x) is zero for x < 0, F_X(x) = 0.

(b) For 0 < x ≤ 2:  F_X(x) = ∫_{0}^{x} (3/8) du = (3/8) x

(c) For 2 < x ≤ 3:  F_X(x) = ∫_{0}^{2} (3/8) du + ∫_{2}^{x} (1/4) du = (3/8)(2) + (1/4)(x − 2) = (1/4)(x + 1)

(d) For x > 3:  F_X(x) = 1
The sketch of F_X(x) is shown in Fig. 2.17.

Fig. 2.17 The CDF F_X(x), rising from 0 to 1 over 0 ≤ x ≤ 3
2.26 Find k so that f_X(x) = k(1 − 2x²) for 0 < x < 1, and 0 otherwise, is a valid density function, and find the CDF.

Solution

∫_{0}^{1} k(1 − 2x²) dx = 1

k[x − (2/3)x³]_{0}^{1} = k(1 − 2/3) = 1  ⟹  k = 3

That is, f_X(x) = 3(1 − 2x²) for 0 < x < 1.

The CDF is given by

F_X(x) = ∫_{−∞}^{x} f_X(u) du = ∫_{0}^{x} 3(1 − 2u²) du = 3(x − (2/3)x³)

Since f_X(x) = 0 for x ≤ 0, F_X(x) = 0 for x ≤ 0. Therefore

F_X(x) = 0 for x ≤ 0
       = 3(x − (2/3)x³) for 0 < x < 1
       = 1 for x ≥ 1
Practice Problems
Ê 1 1 2 7ˆ
ÁË Ans. (a) k = 5 (b) 2 , 9 , 10 ˜¯
2.9 A commulative distribution function of a random variable X is FX(x) = 1 – (1 + x)e–x, x > 0. Find the pdf of x.
[Ans. xe–x, x > 0]
2.10 The pdf of a continuous random variable X is fX(x) = ke–|x|. Find k and FX(x).
Ê 1 x 1 -x ˆ
ÁË Ans. k = 2 . FX ( x ) = 0.5e for x < 0; 2 (2 - e ) for x > 0˜¯
2.12 Find the cumulative distribution function F_X(x) corresponding to the pdf f_X(x) = (1/π) · 1/(1 + x²), −∞ < x < ∞.
(Ans. F_X(x) = (1/π) tan⁻¹ x + 1/2)

2.13 The pdf of the samples of the amplitude of speech waveforms is found to decay exponentially at a rate α, so the following pdf is proposed:

f_X(x) = C e^{−α|x|}, −∞ < x < ∞

Find the constant C, and then find the probability P(|X| < v). (Ans. C = α/2, 1 − e^{−αv})
The quantity C(n, x) is called a binomial coefficient and is defined as

C(n, x) = n! / (x!(n − x)!)

The binomial density and distribution functions are given by

f_X(x) = Σ_{k=0}^{n} C(n, k) p^k (1 − p)^{n−k} δ(x − k)    (2.25)

and

F_X(x) = Σ_{k=0}^{n} C(n, k) p^k (1 − p)^{n−k} u(x − k)    (2.26)
Fig. 2.18 (a) Binomial density function and (b) binomial distribution function for N = 5 and p = 0.5. The pmf values are 0.03125, 0.15625, 0.3125, 0.3125, 0.15625, 0.03125 for k = 0, 1, ..., 5.

The binomial density and distribution functions are shown in Figs 2.18(a) and 2.18(b) respectively.
Applications

Binomial distribution can be applied to many problems of repeated independent trials with only two possible outcomes in each trial. The applications include (i) many games of chance, (ii) signal detection in radar and sonar systems, and (iii) quality control, to classify items as defective or non-defective.
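The density (2.25) and distribution (2.26) are easy to tabulate with a library routine. A sketch, assuming SciPy is available, reproducing the n = 5, p = 0.5 values of Fig. 2.18:

```python
from scipy.stats import binom

n, p = 5, 0.5
for k in range(n + 1):
    print(k, binom.pmf(k, n, p), binom.cdf(k, n, p))
# pmf: 0.03125, 0.15625, 0.3125, 0.3125, 0.15625, 0.03125
# cdf passes through 0.1875, 0.5, 0.8125, 1.0 as in Fig. 2.18(b)
```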
REVIEW QUESTIONS
4. Define density function. Find the equation for binomial distribution function.
5. What is binomial density function? Find the equation for binomial distribution function.
Solved Problems
2.27 During October, Chennai has rainfall on an average of three days a week. Obtain the probability that (a) rain will fall on at least 2 days of a given week, (b) the first three days of a given week will be rainy and the remaining 4 days will be dry.

Solution

(a) Given: p = 3/7; n = 7

Let P(X = x) denote the probability that rain falls on x days.

P(X ≥ 2) = P(X = 2) + P(X = 3) + P(X = 4) + P(X = 5) + P(X = 6) + P(X = 7)

    = Σ_{k=2}^{7} C(7, k)(3/7)^k (4/7)^{7−k} = 1 − (4/7)⁷ − 7(3/7)(4/7)⁶ = 0.8757

(b) P(first three days rainy and remaining four days dry) = (3/7)³(4/7)⁴ = 0.0084
2.28 The random variable X has a binomial distribution with n = 8 and p = 0.5. Determine the following probabilities: (a) P(X = 4) (b) P(X ≤ 2) (c) P(X ≥ 7) (d) P(3 ≤ X < 5).

Solution For a binomial random variable,

P(X = x) = C(n, x) p^x q^{n−x},  x = 0, ..., n

Given: n = 8 and p = 0.5; therefore, q = 0.5.

(a) P(X = 4) = C(8, 4)(0.5)⁴(0.5)⁴ = 70(0.5)⁸ = 0.2734

(b) P(X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2)

    = (0.5)⁸ + C(8, 1)(0.5)⁸ + C(8, 2)(0.5)⁸ = 0.0039 + 0.0313 + 0.1094 = 0.1445

(c) P(X ≥ 7) = P(X = 7) + P(X = 8) = C(8, 7)(0.5)⁸ + C(8, 8)(0.5)⁸ = 0.0352

(d) P(3 ≤ X < 5) = P(X = 3) + P(X = 4) = C(8, 3)(0.5)⁸ + C(8, 4)(0.5)⁸ = 0.2188 + 0.2734 = 0.4922
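The same probabilities can be cross-checked with library routines; a sketch assuming SciPy is available:

```python
from scipy.stats import binom

n, p = 8, 0.5
print(binom.pmf(4, n, p))                        # (a) 0.2734
print(binom.cdf(2, n, p))                        # (b) 0.1445, includes P(X = 0)
print(binom.sf(6, n, p))                         # (c) P(X >= 7) = 0.0352
print(binom.pmf(3, n, p) + binom.pmf(4, n, p))   # (d) 0.4922
```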
2.29 A random variable X has a binomial distribution with n = 3 and p = 1/4. Find its pmf and distribution function.

Solution

P(X = x) = C(n, x) p^x q^{n−x},  x = 0, 1, 2, ..., n

Given n = 3 and p = 1/4:

P(X = 0) = C(3, 0)(0.25)⁰(0.75)³ = 0.4219
P(X = 1) = C(3, 1)(0.25)(0.75)² = 0.4219
P(X = 2) = C(3, 2)(0.25)²(0.75) = 0.1406
P(X = 3) = C(3, 3)(0.25)³(0.75)⁰ = 0.0156

F_X(x) = 0 for x < 0
F_X(x) = 0.4219 for 0 ≤ x < 1
F_X(x) = 0.8438 for 1 ≤ x < 2
F_X(x) = 0.9844 for 2 ≤ x < 3
F_X(x) = 1 for x ≥ 3
2.30 A machine produces items in which 10% are defective. Every hour the mechanic draws a sample
of size 20 for inspection. If the sample contains no defective items, the operator does not stop the machine.
What is the probability that the machine will not be stopped?
Solution Let X be a random variable that represents the number of defective items in a sample of 20. The probability that an item is defective is p = 0.1. X obeys a binomial distribution:

P(X = x) = C(n, x) p^x (1 − p)^{n−x} = C(20, x)(0.1)^x (0.9)^{20−x}

The machine is not stopped when there is no defective item in the sample:

P(X = 0) = C(20, 0)(0.1)⁰(0.9)²⁰ = (0.9)²⁰ = 0.1216
2.31 Find the probability that in tossing a fair coin 5 times, there will appear (a) 3 heads, (b) 3 heads and 2 tails, (c) at least 1 head, (d) not more than 1 tail.

Solution The probability of getting a head or a tail in tossing a coin is 1/2; that is, p = q = 1/2. The coin is tossed 5 times, so n = 5.

For a binomial random variable, P(X = x) = C(n, x) p^x q^{n−x}.

(a) P(getting 3 heads) = C(5, 3)(1/2)³(1/2)² = 5/16

(b) P(getting 3 heads and 2 tails) = C(5, 3)(1/2)³(1/2)² = 5/16

(c) P(at least 1 head) = 1 − P(no heads) = 1 − C(5, 0)(1/2)⁰(1/2)⁵ = 1 − 1/32 = 31/32

(d) P(not more than 1 tail) = P(0 tails) + P(1 tail) = 1/32 + 5/32 = 3/16
2.32 In 256 sets of 12 tosses of a fair coin, in how many cases may one expect 8 heads and 4 tails?

Solution

Given: n = 12; p = q = 1/2

P(getting 8 heads and 4 tails) = C(12, 8)(1/2)⁸(1/2)⁴ = C(12, 8)(1/2)¹² = 0.1208

The number of cases in which one may expect 8 heads and 4 tails is (256)(0.1208) = 30.9, or about 31 times.
2.33 A lot contains 2% defective items. What should be the number of items in a random sample so that the probability of finding at least 1 defective item in it is at least 0.9?

Solution We require P(X ≥ 1) = 1 − P(X = 0) ≥ 0.9, that is, P(X = 0) ≤ 0.1:

P(X = 0) = C(n, 0) p⁰ q^n = (0.98)^n = 0.1

n log(0.98) = log(0.1)  ⟹  n = 113.97

Therefore, n = 114.
2.34 An irregular 6-faced die is such that the probability that it gives 3 odd numbers in 7 throws is twice the probability that it gives 4 odd numbers in 7 throws. How many sets of exactly 7 trials can be expected to give no odd number out of 5000 sets?

Solution Let X be the number of odd outcomes. Let p₁ be the probability of getting 3 odd numbers out of 7 throws and p₂ the probability of getting 4 odd numbers out of 7 throws. Given p₁ = 2p₂:

P(X = 3) = 2P(X = 4)

C(7, 3) p³q⁴ = 2 C(7, 4) p⁴q³  ⟹  p³q⁴ = 2p⁴q³  ⟹  q = 2p

With p + q = 1, p = 1/3 and q = 2/3.

P(no odd number) = P(X = 0) = C(7, 0) p⁰ q⁷ = (2/3)⁷

Total number of sets expected to give no odd number = 5000 (2/3)⁷ = 292.64, or about 293 sets.
2.35 A system is divided into n components, each of which functions independently with probability p. The system functions effectively if at least one-half of its components operate. For what values of p is a 5-component system more likely to function effectively than a 3-component system?

Solution Let the random variable X denote the number of components that function. Then

P(X = x) = C(n, x) p^x q^{n−x},  x = 0, 1, 2, ..., n

A 5-component system functions effectively if 3, 4 or 5 components function. Hence,

P(5-component system functions) = P(X = 3) + P(X = 4) + P(X = 5)

    = C(5, 3) p³q² + C(5, 4) p⁴q + C(5, 5) p⁵ = 10p³q² + 5p⁴q + p⁵

A 3-component system functions effectively if 2 or 3 components function:

P(3-component system functions) = P(X = 2) + P(X = 3) = C(3, 2) p²q + C(3, 3) p³ = 3p²q + p³

Setting 10p³q² + 5p⁴q + p⁵ > 3p²q + p³ with q = 1 − p and simplifying gives 3(p − 1)²(2p − 1) > 0, which holds for p > 1/2. Hence the 5-component system is more likely to function effectively when p > 1/2.
2.36 Data is transmitted in blocks of 16 bits, and each bit is received in error with probability 0.05, independently. Find the probability that a block contains 4 or more bit errors.

Solution Let X represent the number of errors in a block.

Given: p = 0.05; q = 0.95; n = 16

P(X ≥ 4) = 1 − P(X < 4) = 1 − {P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3)}

P(X = x) = C(16, x)(0.05)^x (0.95)^{16−x}

P(X ≥ 4) = 1 − {C(16, 0)(0.95)¹⁶ + C(16, 1)(0.05)(0.95)¹⁵ + C(16, 2)(0.05)²(0.95)¹⁴ + C(16, 3)(0.05)³(0.95)¹³}

    = 1 − {0.4401 + 0.3706 + 0.1463 + 0.0359} = 0.0071
2.37 In the month of September, on an average, rain falls on 10 days. Find the probability (a) that rain will fall on just two days of a given week, (b) that it rains on the first three days of a given week and does not rain on the remaining four days.

Solution

The probability that rain falls on any given day is 10/30 = 1/3.

The probability that rain falls on x days in a week is C(7, x)(1/3)^x (2/3)^{7−x}.

(a) P(X = 2) = C(7, 2)(1/3)²(2/3)⁵ = 0.3073

(b) P(first three days rain and next four days no rain) = (1/3)³(2/3)⁴ = 0.0073
2.38 If the probability of success is 1/50, how many trials are necessary in order that the probability of at least one success is greater than 1/2?

Solution Let the random variable X denote the number of successes.

Given: p = 1/50 and q = 1 − p = 49/50

P(at least one success) = P(X ≥ 1) = 1 − P(X = 0)

P(X = 0) = C(n, 0) p⁰ (1 − p)^n = (49/50)^n

1 − (49/50)^n > 1/2  ⟹  (0.98)^n < 1/2  ⟹  n log(0.98) < log(0.5)  ⟹  n > 34.3

Hence n = 35.
2.39 A certain airline, having observed that not all persons making reservations show up for the flight, sells 125 seats for a flight that holds only 120 passengers. The probability that a passenger does not show up is 0.1, and the passengers behave independently.

(a) What is the probability that every passenger who shows up can take the flight?
(b) What is the probability that the flight departs with empty seats?

Solution Let X be the number of persons who make reservations and show up for the flight. The random variable X has a binomial distribution with n = 125 and p = 0.9.

(a) The probability that every passenger who shows up can take the flight is

P(X ≤ 120) = 1 − P(X > 120) = 1 − {P(X = 121) + P(X = 122) + P(X = 123) + P(X = 124) + P(X = 125)}

    = 1 − Σ_{k=121}^{125} C(125, k)(0.9)^k (0.1)^{125−k} = 0.9961

(b) The flight departs with empty seats if X ≤ 119:

P(X ≤ 119) = P(X ≤ 120) − P(X = 120) = 0.9961 − 7.57 × 10⁻³ = 0.9885
2.40 A family has 5 children. Find the probability that there are (a) 3 boys and 2 girls, and (b) fewer girls than boys.

Solution

n = 5, p = 1/2

(a) P(3 boys) = C(5, 3)(1/2)³(1/2)² = 0.3125

(b) There are fewer girls than boys if there are 0, 1 or 2 girls. Then

P = P(0 girls) + P(1 girl) + P(2 girls) = (1/2)⁵ + C(5, 1)(1/2)⁵ + C(5, 2)(1/2)⁵ = (1/32)[1 + 5 + 10] = 0.5
Practice Problems

2.14 A random variable X has a binomial distribution with n = 10 and p = 0.1. Determine the following probabilities: (a) P(X = 4) (b) P(X ≤ 2) (c) P(2 ≤ X < 5). (Ans. (a) 0.0111 (b) 0.9298 (c) 0.2623)

2.15 A multiple-choice test has six questions, each of which has four possible answers. What is the probability that a student will get four or more correct answers by just guessing? (Ans. 0.0376)

2.16 In an electronics laboratory, it is found that 10% of transistors are defective. A random sample of 20 transistors is taken for inspection. (a) What is the probability that all are good? (b) What is the probability that at most 3 transistors are defective? (Ans. (a) 0.1216 (b) 0.8671)

2.17 In a large consignment of electric bulbs, 10% are known to be defective. A random sample of 20 is taken for inspection. Find the probability that (i) all are good bulbs, (ii) at most 3 are defective bulbs. (Ans. (i) 0.1216 (ii) 0.8666)
P(X₁ = n₁, X₂ = n₂, ..., X_k = n_k) = [n! / (n₁! n₂! ... n_k!)] p₁^{n₁} p₂^{n₂} ... p_k^{n_k}    (2.27)

where Σ_{i=1}^{k} n_i = n
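The multinomial pmf (2.27) is also available as a library routine. A one-line sketch, assuming SciPy is available; the numbers correspond to Solved Problem 2.41 below:

```python
from scipy.stats import multinomial

print(multinomial.pmf([5, 4, 1], n=10, p=[0.5, 0.3, 0.2]))   # ~0.0638
```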
Solved Problems

2.41 A lab attendant keeps a large number of resistors in a drawer. About 50 per cent of these resistors are 1 kΩ, about 30% are 10 kΩ, and the remaining 20% are 1 MΩ. Suppose that 10 resistors are chosen at random. What is the probability that there are exactly five 1 kΩ resistors, four 10 kΩ resistors and one 1 MΩ resistor?

Solution

p₁ = P(R = 1 kΩ) = 0.5
p₂ = P(R = 10 kΩ) = 0.3
p₃ = P(R = 1 MΩ) = 0.2

Also, n = 10; n₁ = 5; n₂ = 4; n₃ = 1. Let X₁, X₂ and X₃ be the numbers of 1 kΩ, 10 kΩ and 1 MΩ resistors selected.

P(X₁ = 5, X₂ = 4, X₃ = 1) = [10! / (5! 4! 1!)] (0.5)⁵ (0.3)⁴ (0.2)¹ = 0.0638
2.5.3 Poisson Distribution

A discrete random variable X taking on one of the values 0, 1, 2, ... is said to be a Poisson random variable with parameter λ, where λ > 0, if its pmf is of the form

p_X(k) = λ^k e^{−λ} / k!,  k = 0, 1, 2, ...    (2.28)

and

F_X(x) = e^{−λ} Σ_{k=0}^{∞} (λ^k / k!) u(x − k)    (2.31)
The Poisson distribution has many applications in diverse areas because it may be used as an approximation for a binomial random variable with parameters (n, p) when n is large and p is small so that their product np is of moderate size. Quantities such as the number of telephone calls arriving at a switchboard during various intervals of time, the number of customers arriving at a bank during various intervals of time, random failures of experiments, the number of earthquakes occurring during some fixed time span, and the number of electrons emitted from a heated cathode during a fixed time period are usually modelled by Poisson random variables.
Fig. 2.19 Poisson (a) density, and (b) distribution functions for λ = 5
The Poisson density and distribution functions are shown in Fig. 2.19.
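The pmf (2.28) and its running sums are easy to evaluate with a library routine. A sketch, assuming SciPy is available; with λ = 4 this reproduces the numbers of Solved Problem 2.42 below:

```python
from scipy.stats import poisson

lam = 4
print([round(poisson.pmf(k, lam), 4) for k in range(6)])
# [0.0183, 0.0733, 0.1465, 0.1954, 0.1954, 0.1563]
print(poisson.cdf(5, lam))     # P(0 <= X <= 5) = 0.785
```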
2.42 A random variable X is known to be Poisson with λ = 4. (a) Plot the density and distribution functions for this random variable. (b) What is the probability of the event 0 ≤ X ≤ 5?

Solution Given: λ = 4

(a) The density function is

f_X(x) = e^{−λ} Σ_{k=0}^{∞} (λ^k / k!) δ(x − k)

f_X(0) = e^{−4}(4)⁰/0! = e^{−4} = 0.0183
f_X(1) = e^{−4}(4)/1! = 0.0733
f_X(2) = e^{−4}(4)²/2! = 0.1465
f_X(3) = e^{−4}(4)³/3! = 0.1954
f_X(4) = e^{−4}(4)⁴/4! = 0.1954
f_X(5) = e^{−4}(4)⁵/5! = 0.1563
f_X(6) = e^{−4}(4)⁶/6! = 0.1042
f_X(7) = e^{−4}(4)⁷/7! = 0.0595
f_X(8) = e^{−4}(4)⁸/8! = 0.0298
f_X(9) = e^{−4}(4)⁹/9! = 0.0132

Fig. 2.20 pdf of the Poisson random variable (peak value 0.1954 at x = 3 and x = 4)

The plot of the density function is shown in Fig. 2.20. The distribution function is given by

F_X(x) = e^{−λ} Σ_{k=0}^{∞} (λ^k / k!) u(x − k),  or equivalently  F_X(k) = P[X ≤ k] = Σ_{j=0}^{k} λ^j e^{−λ} / j!

F_X(0) = e^{−4}(4)⁰/0! = 0.0183
F_X(1) = 0.0183 + 0.0733 = 0.0916
F_X(2) = 0.0183 + 0.0733 + 0.1465 = 0.2381
Continuing, F_X(3) = 0.4335, F_X(4) = 0.6288 and F_X(5) = 0.7851.

Fig. 2.21 CDF of the Poisson random variable for λ = 4

(b) P(0 ≤ X ≤ 5) = F_X(5) = 0.785
For outcomes occurring at an average rate λ per unit time, the number of outcomes X in an interval of length T satisfies

P(X = k) = e^{−λT} (λT)^k / k!    (2.32)

Thus, the number of outcomes in an interval of length T is a Poisson distributed random variable with parameter b = λT, where λ is the average number of outcomes per unit time.

2.43 Events occur at an average rate of 2 per 4 minutes. Find the probability that (a) no event occurs in a 10-minute interval, and (b) 4 or more events occur in a 10-minute interval.

Solution Given: λ = 2/4 = 1/2 per minute and T = 10 min, so

b = λT = (1/2)(10) = 5
(a) P(X = k) = e^{−b} b^k / k!

P(X = 0) = e^{−5}(5)⁰/0! = e^{−5} = 0.0067

(b) P(X ≥ 4) = 1 − P(X ≤ 3) = 1 − P(X = 0) − P(X = 1) − P(X = 2) − P(X = 3)

P(X = 1) = e^{−5}(5)/1! = 0.0337
P(X = 2) = e^{−5}(5)²/2! = 0.0842
P(X = 3) = e^{−5}(5)³/3! = 0.1404

P(X ≥ 4) = 1 − (0.0067 + 0.0337 + 0.0842 + 0.1404) = 0.735
REVIEW QUESTIONS
6. What is Poisson random variable? Explain in brief.
7. Prove that for large values of n, binomial distribution can be approximated to Poisson distribution.
Solved Problems
2.44 In a country, the suicide rate is five per million people per month. Find the probability that in a city of a population of 10,00,000, there will be at most five suicides in a month.

Solution Given: n = 10,00,000

p = 5/10,00,000 = 5 × 10⁻⁶

np = 10,00,000 × (5 × 10⁻⁶) = 5

Since n is large and p is small, the Poisson distribution with λ = 5 can be assumed for this problem. Using the Poisson pmf,

P(X ≤ 5) = Σ_{k=0}^{5} e^{−λ} λ^k / k! = e^{−5} [1 + 5 + 5²/2! + 5³/3! + 5⁴/4! + 5⁵/5!]

    = 0.6160
2.45 The accident rate in a year is one per thousand people. Given that an insurance company has insured 5,000 persons from the population, find the probability that at most two persons will incur this accident.

Solution

Given: p = 1/1000 = 0.001 and n = 5000, so np = 5.

Since n is large and p is small, we can use the Poisson distribution with λ = 5:

P[X = k] = e^{−λ} λ^k / k!

P[X ≤ 2] = P[X = 0] + P[X = 1] + P[X = 2] = e^{−5}(5)⁰/0! + e^{−5}(5)/1! + e^{−5}(5)²/2!

    = (1 + 5 + 5²/2) e^{−5} = 0.1247
2.46 Suppose you buy a lottery ticket in 50 lotteries, in each of which your chance of winning a prize is 1/100. What is the (approximate) probability that you will win a prize (a) at least once, (b) exactly once, (c) at least twice?

Solution

λ = np = 50 × (1/100) = 0.5

P(X = k) = e^{−λ} λ^k / k!

(a) P(X ≥ 1) = 1 − P(X = 0) = 1 − e^{−0.5} = 0.3935

(b) P(X = 1) = e^{−0.5}(0.5)/1! = 0.3033

(c) P(X ≥ 2) = 1 − P(X = 0) − P(X = 1) = 1 − e^{−0.5} − e^{−0.5}(0.5) = 0.0902
Poisson Approximation to the Binomial Distribution

Consider a binomial random variable X with parameters (n, p) and pmf

P_X(x) = C(n, x) p^x (1 − p)^{n−x},  x = 0, 1, 2, ..., n    (2.33)

where C(n, x) is the binomial coefficient n!/((n − x)! x!). The pmf involves evaluating n!, whose value is very large even for moderate values of n. Therefore, it is useful to develop an approximate method for large values of n.

Let us define a parameter λ equal to the product of n and p. That is, λ = np, from which p = λ/n. Substituting this value in the expression for the pmf, we obtain

P_X(x) = C(n, x) (λ/n)^x (1 − λ/n)^{n−x}    (2.34)

    = [n!/((n − x)! x!)] (λ/n)^x (1 − λ/n)^{n−x}

    = [n(n − 1)(n − 2)···(n − x + 1)/n^x] (λ^x/x!) (1 − λ/n)^{n−x}

    = (1 − 1/n)(1 − 2/n)···(1 − (x − 1)/n) (λ^x/x!) (1 − λ/n)^n (1 − λ/n)^{−x}

As n → ∞, each factor (1 − k/n) → 1, (1 − λ/n)^{−x} → 1, and (1 − λ/n)^n → e^{−λ}, so we get

lim_{n→∞} P_X(x) = λ^x e^{−λ} / x!    (2.35)

which is a Poisson distribution.
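The quality of the approximation is easy to inspect numerically. A sketch, assuming SciPy is available, comparing the exact binomial pmf (2.33) with its Poisson limit (2.35) for large n and small p:

```python
from scipy.stats import binom, poisson

n, p = 1000, 0.005            # lam = np = 5
for k in (0, 2, 5, 10):
    print(k, binom.pmf(k, n, p), poisson.pmf(k, n * p))
# the two columns agree to about three decimal places
```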
2.47 Assume automobiles arriving at a gasoline station are Poisson distributed and arrive at an average rate of 50/h. If all cars are assumed to require one minute to obtain fuel, what is the probability that a waiting line will occur at the pump?

Solution

T = 1 minute, so b = λT = (50/60)(1) = 5/6

A waiting line occurs if two or more cars arrive in any one-minute interval. Therefore, the probability that a waiting line occurs is P(X ≥ 2):

P(X ≥ 2) = 1 − P(X ≤ 1) = 1 − P(X = 0) − P(X = 1)

For the Poisson distribution, P(X = k) = e^{−b} b^k / k! with b = 5/6, so

P(X ≥ 2) = 1 − e^{−5/6} [1 + 5/6] = 0.2032
2.48 A manufacturer of cotton pins knows that 5% of his products are defective. If he sells pins in boxes of 100 and guarantees that not more than 4 pins will be defective, what is the approximate probability that a box will fail to meet the guaranteed quality?

Solution Given:

p = 5/100 = 0.05 and n = 100, so λ = np = 100(0.05) = 5

We have P(X = x) = e^{−λ} λ^x / x!

The box fails to meet the guaranteed quality if it contains more than 4 defective pins:

P(X > 4) = 1 − P(X ≤ 4) = 1 − {P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4)}

    = 1 − e^{−5} (1 + 5 + 5²/2! + 5³/3! + 5⁴/4!) = 0.5595
2.49 Wireless sets are manufactured with 25 soldered joints each, and on the average 1 joint in 500 is defective. How many sets can be expected to be free from defective joints in a consignment of 10000 sets?

Solution Given: n = 25 and p = 1/500, so λ = np = 25/500 = 1/20.

Let X be the random variable that represents the number of defective joints in a set.

P(X = x) = e^{−λ} λ^x / x!

P(X = 0) = e^{−1/20} (1/20)⁰/0! = e^{−1/20} = 0.9512

The expected number of sets that are free from defects is (0.9512) × (10,000) = 9512.
2.50 The number of particles emitted from a radioactive source in a specified interval follows a Poisson distribution. If the probability of no emission equals 1/3, what is the probability that 2 or more emissions occur?

Solution Let X be the random variable that denotes the number of particles emitted. We have

P(X = x) = e^{−λ} λ^x / x!

P(X = 0) = e^{−λ} = 1/3, from which λ = ln 3

P(2 or more emissions) = P(X ≥ 2) = 1 − {P(X = 0) + P(X = 1)}

P(X = 1) = e^{−λ} λ/1! = (1/3) ln 3

P(X ≥ 2) = 1 − {1/3 + (1/3) ln 3} = 2/3 − (1/3) ln 3 = (2 − ln 3)/3
2.51 A company markets a modem that has a bit error probability of 10⁻⁴, and the bit errors are independent. The buyer will test the modem by sending a known message of 10⁴ digits and checking the received message. He will reject the modem if the number of errors is more than two. Find the probability that the customer will buy the modem.

Solution Here n = 10⁴ and p = 10⁻⁴, so λ = np = 1. Therefore, with P(X = x) = e^{−λ} λ^x / x!,

P(X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2) = e^{−1} [1 + 1 + 1²/2!] = e^{−1}(2.5) = 0.9197
2.52 In a lot of semiconductor diodes, 1 in 400 diodes is defective. If the diodes are packed in boxes of 100, what is the probability that any given box of diodes will contain (a) no defective, (b) 1 or more defective, and (c) less than 2 defective diodes?

Solution The probability p = 1/400 and n = 100, so

λ = np = 100/400 = 1/4 = 0.25

Let X be the random variable that represents the number of defective diodes.

P(X = x) = e^{−λ} λ^x / x!

(a) P(X = 0) = e^{−0.25} = 0.7788

(b) P(1 or more defective) = P(X ≥ 1) = 1 − P(X = 0) = 1 − 0.7788 = 0.2212

(c) P(less than 2 defectives) = P(X < 2) = P(X = 0) + P(X = 1) = 0.7788 + e^{−0.25} × 0.25 = 0.9735
2.53 The number of page requests that arrive at a Web server is a Poisson random variable with an average of 3000 requests per minute.

(a) Find the probability that there are no requests in a 100 ms period.
(b) Find the probability that there are between 2 and 4 requests in a 100 ms period.

Solution The number of page requests is 3000 per minute, which is equal to 50 requests per second, or 5 requests per 100 ms. Therefore, λ = 5.

P[X = x] = e^{−λ} λ^x / x!

(a) P(X = 0) = e^{−λ} = e^{−5} = 0.0067

(b) P(2 ≤ X ≤ 4) = Σ_{x=2}^{4} (5)^x e^{−5}/x! = e^{−5} [5²/2! + 5³/3! + 5⁴/4!]

    = e^{−5}(25)[1/2 + 5/6 + 25/24] = 0.4
2.54 Let X be a Poisson random variable with parameter λ. What value of λ maximizes P(X = k)?

Solution P(X = k) = λ^k e^{−λ}/k!. Differentiating with respect to λ and equating to zero,

d/dλ [λ^k e^{−λ}/k!] = (e^{−λ}/k!) λ^{k−1} [k − λ] = 0

from which λ = k.
2.55 In an industrial area, the average number of fatal accidents per month is 1/3. The number of accidents per month is described by a Poisson distribution. What is the probability that 6 months will pass without an accident?

Solution Given: λ = 1/3 per month and T = 6, so b = λT = (1/3)(6) = 2.

Let X be the random variable that represents the number of accidents.

P(X = x) = e^{−b} b^x / x!

P(X = 0) = e^{−2}(2)⁰/0! = e^{−2} = 0.135
2.56 If X is a Poisson random variable with parameter λ, find the probability that X takes an even value.

Solution

P(X even) = e^{−λ} Σ_{k even} λ^k/k! = e^{−λ} (1 + λ²/2! + λ⁴/4! + ···)

    = e^{−λ} (e^{λ} + e^{−λ})/2

    = (1 + e^{−2λ})/2
2.57 The number of errors in a textbook follows a Poisson distribution with a mean of 0.02 errors per page. What is the probability that there are two or fewer errors in 100 pages?

Solution

λ = 0.02 × 100 = 2

P(X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2) = e^{−2} + e^{−2}(2)/1! + e^{−2}(2)²/2!

    = e^{−2}(1 + 2 + 2) = 5e^{−2} = 0.6766
2.58 A textbook that contains 400 pages has 250 errors. Find the probability that a given page contains (a) 2 errors, and (b) 2 or more errors.

Solution Given: n = 250, p = 1/400

λ = np = 250/400 = 5/8

(a) P(2 errors) = (5/8)² e^{−5/8}/2! = 0.1045

(b) P(0 errors) = (5/8)⁰ e^{−5/8}/0! = 0.5353

P(1 error) = (5/8)¹ e^{−5/8}/1! = 0.3345

P(2 or more errors) = 1 − P(0 errors) − P(1 error) = 1 − 0.5353 − 0.3345 = 0.1302
Practice Problems

2.18 If customers arrive at a counter in accordance with a Poisson process with a mean of 2 per minute, find the probability that the interval between two consecutive arrivals is more than 1 minute. (Ans. 0.135)

2.19 The number of telephone calls that arrive at a telephone exchange is modelled as a Poisson random variable with an average of 15 calls per hour.
(a) What is the probability that there are 5 or fewer calls in one hour?
(b) What is the probability that there are exactly 20 calls in one hour? (Ans. (a) 0.00279 (b) 0.0418)

2.20 A random variable X is known to be Poisson with λ = 5. (a) Plot the density and distribution functions for this random variable. (b) What is the probability of the event (0 ≤ X ≤ 6)? (Ans. 0.759)

2.21 In a city, the average number of murders per week is 5 and their occurrences follow a Poisson distribution. What is the probability that there will be 8 or more murders in a given week? (Ans. 0.133)

2.22 In an industry, the average number of fatal accidents per month is 1. The number of accidents per month follows a Poisson distribution. What is the probability that 6 months will pass without a fatal accident? (Ans. 0.00248)

2.23 Find the probability that at most 5 defective fuses will be found in a box of 200 fuses, if experience shows that 2% of such fuses are defective. (Ans. 0.781)

2.24 A switchboard can handle only 24 phone calls per minute. If the incoming calls per minute follow a Poisson distribution with parameter 16, find the probability that the switchboard is overloaded in any one minute. (Ans. 0.29)
Solved Problems

2.59 If X is a geometric random variable with parameter p, show that P(X > s + t | X > t) = P(X > s) (the memoryless property).

Solution

P(X > t) = Σ_{k=t+1}^{∞} p q^{k−1} = p q^t (1 + q + q² + ···) = p q^t/(1 − q) = q^t

P(X > s + t | X > t) = P(X > s + t)/P(X > t) = q^{s+t}/q^t = q^s

We have P(X > s) = q^s. Therefore, P(X > s + t | X > t) = P(X > s).
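The memoryless property can also be verified numerically. A sketch, assuming SciPy is available (in SciPy's convention, geom.sf(k, p) returns P(X > k) = q^k):

```python
from scipy.stats import geom

p, s, t = 0.3, 4, 6
print(geom.sf(s + t, p) / geom.sf(t, p))   # P(X > s+t | X > t)
print(geom.sf(s, p))                       # P(X > s); both equal 0.7^4 = 0.2401
```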
2.60 The probability that rain will occur on any given day during summer (May to June) equals 0.15. Assuming independence from day to day, what is the probability that the first rainy day is May 4?

Solution The number of days up to and including the first rainy day is geometric: P(X = n) = p q^{n−1}. Here, n = 4. Therefore,

P(X = 4) = (0.15)(0.85)³ = 0.0921
2.61 A die is thrown repeatedly until 5 appears. What is the probability that it must be thrown more than 4 times?

Solution Given n = 4. The probability of getting a 5 in a throw of a die is 1/6, so p = 1/6 and q = 5/6.

P(X = n) = p q^{n−1}

P(X > 4) = 1 − {P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4)} = 1 − p(1 + q + q² + q³)

    = 1 − p(1 − q⁴)/(1 − q) = q⁴ = (5/6)⁴ = 0.4823
2.62 Suppose that a trainee soldier shoots at a target, with shots independent. The probability that the target is hit on any one shot is 0.7.

(a) What is the probability that the target would be hit on the 10th attempt?
(b) What is the probability that it takes him less than 4 shots?
(c) What is the probability that it takes him an even number of shots?

Solution Here p = 0.7 and q = 0.3, and X, the number of shots needed for the first hit, is geometric.

(a) P(X = 10) = p q⁹ = 0.7(0.3)⁹ = 1.38 × 10⁻⁵

(b) P(X < 4) = P(X = 1) + P(X = 2) + P(X = 3) = p + pq + pq²

    = p(1 − q³)/(1 − q) = 1 − q³ = 1 − (0.3)³ = 0.973

(c) P(target hit in an even number of shots) = P(X = 2) + P(X = 4) + P(X = 6) + ···

    = pq + pq³ + pq⁵ + ··· = pq(1 + q² + q⁴ + ···) = pq/(1 − q²)

    = (0.7)(0.3)/(1 − (0.3)²) = 0.2308
2.63 The probability of a student passing his examination on any attempt is 0.75. What is the probability that he will pass the examination (a) on the third attempt, and (b) in less than three attempts?

Solution Let X denote the number of attempts.

(a) The probability that the student passes the examination on the third attempt is

P(X = 3) = p q² = 0.75(0.25)² = 0.0469

(b) The probability that he will pass in less than three attempts is

P(X < 3) = P(X = 1) + P(X = 2) = p + pq = p(1 + q) = 0.75(1 + 0.25) = 0.9375
2.64 The probability of a successful optical alignment in the assembly of an optical storage product is 0.8. Assuming that trials are independent, what is the probability that the first successful alignment requires exactly four trials?

Solution Let X denote the number of trials required to achieve the first successful alignment. Then

P(X = 4) = p q^{4−1} = p q³

Given p = 0.8 and q = 0.2:

P(X = 4) = 0.8(0.2)³ = 0.0064
Practice Problems
2.25 If the probability that a student passes the examination equals 0.4, what is the probability that he needs fewer than 5 attempts to pass the examination? (Ans. 0.92)
2.26 The probability that a candidate can pass in an examination is 0.6.
(a) What is the probability that he pass in the third trial?
(b) What is the probability that he pass before the third trial? (Ans. 0.096, 0.84)
Solved Problems

2.65 A coin is tossed until the first head occurs. If the tosses are independent and the probability of a head occurring is p, find the value of p so that the probability that an odd number of tosses is required is equal to 0.75.

Solution Let X be the random variable that represents the number of tosses required to get the first head.

P(odd number of tosses required) = P(X = 1) + P(X = 3) + P(X = 5) + ···

    = p + pq² + pq⁴ + ··· = p(1 + q² + q⁴ + ···) = p/(1 − q²)

    = p/((1 − q)(1 + q)) = p/(p(1 + q)) = 1/(1 + q)

Given 1/(1 + q) = 0.75, we get q = 0.25/0.75 = 1/3, so p = 1 − q = 2/3.
2.66 Show that for a geometric random variable X, P(X = n + k | X > n) = P(X = k).

Solution

P(X = n + k) = p q^{n+k−1}

P(X > n) = Σ_{j=n+1}^{∞} p q^{j−1} = p q^n (1 + q + q² + ···) = p q^n/(1 − q) = q^n

P(X = n + k | X > n) = P(X = n + k)/P(X > n) = p q^{n+k−1}/q^n = p q^{k−1}

We have P(X = k) = p q^{k−1}. Therefore, P(X = n + k | X > n) = P(X = k).
2.67 A and B shoot independently until each has hit his own target. The probabilities that A and B hit their targets are 2/3 and 4/7 respectively. Find the probability that B will require more shots than A.

Solution Let A require X trials to first hit the target. Then X follows a geometric distribution with pmf

P(X = n) = p₁ q₁^{n−1}, with p₁ = 2/3 and q₁ = 1/3

⟹ P(X = n) = (2/3)(1/3)^{n−1},  n = 1, 2, ...

Let B require Y trials to first hit the target. Then

P(Y = n) = p₂ q₂^{n−1}, with p₂ = 4/7 and q₂ = 3/7

P(Y = n) = (4/7)(3/7)^{n−1},  n = 1, 2, ...

The event that B requires more shots than A occurs when X = n and Y > n, where

P(Y > n) = Σ_{m=n+1}^{∞} (4/7)(3/7)^{m−1} = (3/7)^n

Therefore,

P(B requires more shots than A) = Σ_{n=1}^{∞} P(X = n) P(Y > n)

    = Σ_{n=1}^{∞} (2/3)(1/3)^{n−1}(3/7)^n = (2/3)(3/7) Σ_{n=1}^{∞} (1/7)^{n−1}

    = (2/7) · 1/(1 − 1/7) = (2/7)(7/6) = 1/3
2.68 If the probability that an applicant for a driver's licence will pass the road test on any given trial is 0.8, what is the probability that he will finally pass the test (a) on the fourth trial, and (b) in less than 4 trials?

Solution Let X represent the number of trials required to achieve the first success. Then X is a random variable with geometric distribution

P(X = x) = p q^{x−1},  x = 1, 2, 3, ...

Given p = 0.8, so q = 0.2.

(a) P(passing the test on the 4th trial) = P(X = 4) = 0.8(0.2)³ = 0.0064

(b) P(passing the test in less than 4 trials) = P(X < 4) = P(X = 1) + P(X = 2) + P(X = 3)

    = Σ_{x=1}^{3} 0.8(0.2)^{x−1} = 0.8(1 + 0.2 + 0.04) = 0.992
In a binomial distribution, we fix the number of trials (n) and are concerned with the number of successes in those n trials, whereas in a negative binomial distribution, we fix the number of successes and then find the number of trials required.

In independent trials, each resulting in a success with probability p, the probability that r successes occur before m failures is given by

Σ_{n=r}^{r+m−1} C(n − 1, r − 1) p^r (1 − p)^{n−r}
Solved Problem

2.69 The probability of hockey player A scoring a goal through a penalty corner is 60%. In a match, what is the probability that A scores his third goal in the 5th penalty corner?

Solution

Given: p = 0.6, so q = 0.4; n = 5, r = 3

P(X = 5) = C(5 − 1, 3 − 1)(0.6)³(1 − 0.6)^{5−3} = C(4, 2)(0.6)³(0.4)² = 0.20736
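The same number falls out of the underlying logic directly: the 3rd success lands on the 5th trial exactly when there are 2 successes in the first 4 trials, followed by a success. A one-line check, assuming SciPy is available:

```python
from scipy.stats import binom

# P(2 goals in first 4 corners) * P(goal on the 5th corner)
print(binom.pmf(2, 4, 0.6) * 0.6)   # 0.20736
```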
Practice Problem
2.27 The probability that a cricket player hits a boundary is 0.25. In an over, what is the probability that he hits the 4th
boundary in sixth ball? (Ans. 0.02197)
Any random variable X whose pmf is given by Eq. (2.38) is said to be a hypergeometric random
variable.
Solved Problems
2.70 A vendor supplies 100 heaters to a shop every month. Before accepting the heaters, the owner selects 10 heaters randomly and accepts the lot if none of these is defective. If one or more defectives are found, then all 100 heaters are inspected. If the lot contains 10 defective heaters, what is the probability that 100% inspection is required?

Solution Let X be the number of defectives found. Then 100% inspection is required for X ≥ 1. Therefore,

P(X ≥ 1) = 1 − P(X = 0)

Given: N = 100, n = 10, k = 0. From Eq. (2.37),

P(X = 0) = C(10, 0) C(90, 10) / C(100, 10) = 0.33

Hence P(X ≥ 1) = 1 − 0.33 = 0.67.
2.71 In a college, the numbers of boys and girls are 700 and 300 respectively. Twelve members are drawn randomly to form a committee. What is the probability that the committee consists of all girls?

Solution

P('0' boys and 12 girls) = C(300, 12) C(700, 0) / C(1000, 12) = 4.54 × 10⁻⁷
Practice Problem
2.28 A purchaser of electronic components buys diodes in lots of 10. He inspects 3 diodes randomly from a lot and
accepts the lot only if all 3 are non-defective. If 25% of the lots have 4 defective diodes and 75% have only 1, what
proportion of lots does the purchaser reject? (Ans. 43.3 per cent)
P_K(k) = 1/n for k = x₁, x₂, ..., x_n, and 0 otherwise    (2.39)

The phase of a radio-frequency sinusoid can be modelled as a uniform random variable. Although the transmitter knows the phase of the sinusoid, the receiver may have no information about it. In this case, the phase at the receiver is modelled as a random variable with uniform distribution over the interval (0, 2π). The pmf of a discrete random variable with uniform distribution is shown in Fig. 2.22.
Fig. 2.22 pmf of a discrete uniform random variable
F_X(x) = (x − a)/(b − a) for a ≤ x ≤ b    (2.41)

A plot of the pdf and CDF of a uniform random variable is shown in Fig. 2.23. The pdf has height 1/(b − a) on (a, b). A uniform random variable over the interval (0, 1) is denoted U(0, 1).

Fig. 2.23 (a) pdf and (b) CDF of a uniform random variable
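Uniform probabilities follow directly from the CDF (2.41). A sketch, assuming SciPy is available (SciPy parametrizes the uniform distribution as uniform(loc=a, scale=b−a)); with a = 0, b = 5 this reproduces Solved Problem 2.72 below:

```python
from scipy.stats import uniform

X = uniform(loc=0, scale=5)       # X ~ U(0, 5)
print(X.cdf(2))                   # P(X < 2)     = 0.4
print(X.sf(4))                    # P(X > 4)     = 0.2
print(X.cdf(3) - X.cdf(1))        # P(1 < X < 3) = 0.4
```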
REVIEW QUESTIONS

8. Mention the applications of a uniform random variable.
9. Draw the pdf and CDF of a uniform random variable.
10. Briefly discuss the uniform distribution.
Solved Problems
2.72 If X is uniformly distributed over (0, 5), calculate the probability that (a) X < 2, (b) X > 4, (c) 1 < X < 3.

Solution Here f_X(x) = 1/5 for 0 ≤ x ≤ 5, and 0 otherwise, so

(a) P(X < 2) = 2/5  (b) P(X > 4) = 1/5  (c) P(1 < X < 3) = 2/5
2.73 If Y is a random variable uniformly distributed over (0, 5), find the probability that the roots of the equation 4x² + 4xY + Y + 2 = 0 are both real.

Solution For a uniform distribution,

f_X(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise

Given b = 5 and a = 0, therefore f_Y(y) = 1/5 for 0 ≤ y ≤ 5, and 0 otherwise.

The roots are real if the discriminant is non-negative: (4Y)² − 4(4)(Y + 2) ≥ 0, that is, Y² − Y − 2 = (Y − 2)(Y + 1) ≥ 0, which for 0 ≤ Y ≤ 5 requires Y ≥ 2. Hence

P(roots real) = P(Y ≥ 2) = (5 − 2)/5 = 3/5
2.74 The thickness of a sheet in an automobile component is uniformly distributed between 0.90 and 1.10 millimetres.

(a) Determine the CDF of sheet thickness.
(b) Determine the proportion of sheets that exceed 1.0 mm thickness.
(c) What thickness is exceeded by 20% of the sheets?

Solution Here f_X(x) = 1/(1.10 − 0.90) = 5 for 0.90 ≤ x ≤ 1.10, and 0 otherwise.

(a) F_X(x) = 5(x − 0.90) for 0.90 ≤ x ≤ 1.10 (0 below and 1 above this range)

(b) P(X > 1.0) = 5(1.10 − 1.0) = 0.5

(c) We require P(X > x) = 5(1.1 − x) = 0.2:

1.1 − x = 0.04  ⟹  x = 1.1 − 0.04 = 1.06 mm
2.75 If X has a uniform distribution in (−a, a), a > 0, find a such that P(|X| < 1) = P(|X| > 1).

Solution Here f_X(x) = 1/(2a) on (−a, a), so P(|X| < 1) = 2/(2a) = 1/a and P(|X| > 1) = 1 − 1/a. Equating the two,

1/a = 0.5  ⟹  a = 2
2.76 If X is a uniformly distributed random variable with pdf U(−1, 3), find P(X < 0).

Solution For a uniform random variable on the interval (−1, 3), the pdf is given by

f_X(x) = 1/4 for −1 ≤ x ≤ 3, and 0 otherwise

P(X ≥ 0) = ∫_{0}^{3} f_X(x) dx = ∫_{0}^{3} (1/4) dx = 3/4

P(X < 0) = 1 − P(X ≥ 0) = 1 − 3/4 = 1/4
2.77 The number of personal computers sold daily at CompuWorld is uniformly distributed with a minimum of 2000 PCs and a maximum of 5000 PCs.

(a) What is the probability that daily sales will fall between 2,500 and 3,000 PCs?
(b) What is the probability that CompuWorld will sell at least 4,000 PCs?
(c) What is the probability that CompuWorld will sell exactly 2,500 PCs?

Solution Let X be the random variable that represents the number of PCs sold. X follows a uniform distribution with a = 2000 and b = 5000:

f_X(x) = 1/(5000 − 2000) = 1/3000 for 2000 ≤ x ≤ 5000, and 0 otherwise

(a) P(2500 ≤ X ≤ 3000) = ∫_{2500}^{3000} f_X(x) dx = (1/3000)(3000 − 2500) = 500/3000 = 1/6

(b) P(sales of at least 4000 PCs) = P(X ≥ 4000) = ∫_{4000}^{5000} (1/3000) dx = 1000/3000 = 1/3

(c) P(sales of exactly 2500 PCs) = P(X = 2500) = 0
2.78 Starting at 5.00 a.m., every half an hour there is a flight from San Francisco airport to Los Angeles International airport. Suppose that none of these planes is completely sold out and that they always have room for passengers. A person who wants to fly to Los Angeles arrives at the airport at a random time between 8.45 a.m. and 9.45 a.m. Find the probability that she waits (a) at most 10 minutes, and (b) at least 15 minutes.

Solution There is a flight every half an hour, so the flight timings are 5.00 a.m., 5.30 a.m., ..., 8.30 a.m., 9.00 a.m., 9.30 a.m., 10.00 a.m. and so on.

The person arrives between 8.45 a.m. and 9.45 a.m. Measuring time in minutes from 8.45 a.m., her arrival time is a random variable X with uniform distribution:

f_X(x) = 1/60 for 0 < x ≤ 60, and 0 otherwise

(a) The person waits at most 10 minutes if she arrives between 8.50 a.m. and 9.00 a.m. or between 9.20 a.m. and 9.30 a.m. Therefore, we have to find

P(5 < X < 15) + P(35 < X < 45) = ∫_{5}^{15} (1/60) dx + ∫_{35}^{45} (1/60) dx = (1/60)(10) + (1/60)(10) = 1/3

(b) She waits at least 15 minutes if she arrives between 8.45 a.m. and 9.00 a.m. or between 9.15 a.m. and 9.30 a.m.:

P(0 < X < 15) + P(30 < X < 45) = (1/60)(15) + (1/60)(15) = 1/2
2.79 You know that your college bus arrives at your boarding point at a time uniformly distributed between 8.00 and 8.20 a.m. (a) What is the probability that you will have to wait longer than 5 minutes? (b) If at 8.10 a.m. the bus has not yet arrived, what is the probability that you will have to wait at least an additional 5 minutes?

Solution For convenience, we drop the hours and measure time in minutes from 8.00 a.m. Since the arrival time is uniformly distributed over a 20-minute interval, we can write

f_X(x) = 1/20 for 0 ≤ x ≤ 20, and 0 otherwise

(a) The probability that you will have to wait longer than 5 minutes is

P[X ≥ 5] = ∫_{5}^{20} (1/20) dx = 15/20 = 3/4

(b) Given that the bus has not arrived by 8.10 a.m., you wait at least an additional 5 minutes if it arrives after 8.15 a.m.:

P[X ≥ 15 | X ≥ 10] = P[X ≥ 15]/P[X ≥ 10] = (5/20)/(10/20) = 1/2
2.80 A point is chosen at random on the line segment [0, 5]. What is the probability that the chosen point lies between 2.5 and 4?

Solution The randomly chosen point has a uniform distribution with pdf

f_X(x) = 1/5 for 0 ≤ x ≤ 5, and 0 otherwise

The probability that the chosen point lies between 2.5 and 4 is

P[2.5 ≤ X ≤ 4] = ∫_{2.5}^{4} (1/5) dx = (1/5)(1.5) = 0.3
2.81 A point is chosen at random on a line segment of length L. Find the probability that the ratio of the shorter to the longer segment is less than 1/4.

Solution The length of the line segment is L. Since the point is chosen at random, it can be assumed to have a uniform distribution with density

f_X(x) = 1/L for 0 ≤ x ≤ L, and 0 otherwise

The ratio of the shorter to the longer segment is less than 1/4 when the point lies between 0 and L/5, or between 4L/5 and L. Hence, we have to find

P[0 ≤ X ≤ L/5] + P[4L/5 ≤ X ≤ L]

P[0 ≤ X ≤ L/5] = ∫_{0}^{L/5} (1/L) dx = (1/L)(L/5) = 1/5

P[4L/5 ≤ X ≤ L] = ∫_{4L/5}^{L} (1/L) dx = (1/L)(L − 4L/5) = 1/5

P[0 ≤ X ≤ L/5] + P[4L/5 ≤ X ≤ L] = 1/5 + 1/5 = 2/5
Practice Problems

2.29 A random variable X has uniform distribution over (−3, 3). Compute (a) P(X < 2), (b) P(|X| < 2), (c) P(|X − 2| < 2), and (d) find k for which P(X > k) = 1/3.
(Ans. (a) 5/6 (b) 2/3 (c) 1/2 (d) k = 1)

2.30 If X is uniformly distributed over (0, 10), calculate the probability that (a) X > 6, (b) 3 < X < 8. (Ans. (a) 4/10 (b) 1/2)

2.31 Buses arrive at a specified stop at 15-minute intervals starting at 7 a.m. That is, they arrive at 7.00, 7.15, 7.30, 7.45 and so on. If a passenger arrives at the stop at a random time that is uniformly distributed between 7.00 and 7.30 a.m., find the probability that he waits (a) less than 5 minutes for a bus, and (b) at least 12 minutes for a bus. (Ans. (a) 1/3 (b) 1/5)

2.32 A point is chosen at random on the line segment [0, 10]. What is the probability that the chosen point lies between 5 and 7? (Ans. 0.2)

2.33 If the random variable k is uniformly distributed over (1, 7), what is the probability that the roots of the equation x² + 2kx + (2k + 3) = 0 are (a) real, (b) equal? (Ans. 2/3)

2.34 Consider a random variable X ~ U(a, b) where a < b. Find α such that P(|X| > α) = P(|X| < α). (Ans. α = (b − a)/4)

2.35 Find the probability that the equation x² + 3xY + 1 = 0, where Y ~ U(−4, 4), has real roots. (Ans. 5/6)
f_X(x) = (1/√(2πσ_X²)) e^{−(x − m_X)²/(2σ_X²)},  −∞ < x < ∞    (2.42)

The parameter σ_X² is referred to as the variance. The pdf is a bell-shaped curve that is symmetric about m_X, which is the mean of X, and has a width proportional to σ_X. An example of a Gaussian pdf is shown in Fig. 2.24.

Fig. 2.24 (a) Density and (b) distribution functions of a Gaussian random variable

The normal distribution with mean m_X and variance σ_X² is usually designated by the shorthand notation

X ~ N(m_X, σ_X²)    (2.43)

This is read as "X is distributed normally with mean m_X and variance σ_X²". When m_X = 0 and σ_X² = 1, the random variable is called a standard normal random variable and is designated X ~ N(0, 1). However, in engineering literature, the term Gaussian is much more common.
Let us assume

I = ∫_{−∞}^{∞} e^{−z²/2} dz    (2.46)

Then

I² = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{−(z² + y²)/2} dz dy = 2π    (2.48b)

so that I = √(2π). Hence,

(1/√(2πσ_X²)) ∫_{−∞}^{∞} e^{−(x − m_X)²/(2σ_X²)} dx = 1
If X is a random variable with mean m_X and standard deviation σ_X, then the random variable

Z = (X − m_X)/σ_X    (2.49)

is a normal random variable with zero mean and standard deviation 1. The CDF of X can then be written as

F_X(x) = φ[(x − m_X)/σ_X]    (2.52)

If X is normally distributed with parameters m_X and σ_X², then Z = (X − m_X)/σ_X is a standard normal random variable.
Table 2.4 Area φ(x) under the standard normal curve to the left of x
X .00 .01 .02 .03 .04 .05 .06 .07 .08 .09
.0 .5000 .5040 .5080 .5120 .5160 .5199 .5239 .5279 .5319 .5359
.1 .5398 .5438 .5478 .5517 .5557 .5596 .5636 .5675 .5714 .5753
.2 .5793 .5832 .5871 .5910 .5948 .5987 .6026 .6064 .6103 .6141
.3 .6179 .6217 .6255 .6293 .6331 .6368 .6406 .6443 .6480 .6517
.4 .6554 .6591 .6628 .6664 .6700 .6736 .6772 .6808 .6844 .6879
.5 .6915 .6950 .6985 .7019 .7054 .7088 .7123 .7157 .7190 .7224
.6 .7257 .7291 .7324 .7357 .7389 .7422 .7454 .7486 .7517 .7549
.7 .7580 .7611 .7642 .7673 .7704 .7734 .7764 .7794 .7823 .7852
.8 .7881 .7910 .7939 .7967 .7995 .8023 .8051 .8078 .8106 .8133
.9 .8159 .8186 .8212 .8238 .8264 .8289 .8315 .8340 .8365 .8389
1.0 .8413 .8438 .8461 .8485 .8508 .8531 .8554 .8577 .8599 .8621
1.1 .8643 .8665 .8686 .8708 .8729 .8749 .8770 .8790 .8810 .8830
1.2 .8849 .8869 .8888 .8907 .8925 .8944 .8962 .8980 .8997 .9015
1.3 .9032 .9049 .9066 .9082 .9099 .9115 .9131 .9147 .9162 .9177
1.4 .9192 .9207 .9222 .9236 .9251 .9265 .9279 .9292 .9306 .9319
1.5 .9332 .9345 .9357 .9370 .9382 .9394 .9406 .9418 .9429 .9441
1.6 .9452 .9463 .9474 .9484 .9495 .9505 .9515 .9525 .9535 .9545
1.7 .9554 .9564 .9573 .9582 .9591 .9599 .9608 .9616 .9625 .9633
1.8 .9641 .9649 .9656 .9664 .9671 .9678 .9686 .9693 .9699 .9706
1.9 .9713 .9719 .9726 .9732 .9738 .9744 .9750 .9756 .9761 .9767
2.0 .9772 .9778 .9783 .9788 .9793 .9798 .9803 .9808 .9812 .9817
2.1 .9821 .9826 .9830 .9834 .9838 .9842 .9846 .9850 .9854 .9857
2.2 .9861 .9864 .9868 .9871 .9875 .9878 .9881 .9884 .9887 .9890
2.3 .9893 .9896 .9898 .9901 .9904 .9906 .9909 .9911 .9913 .9916
2.4 .9918 .9920 .9922 .9925 .9927 .9929 .9931 .9932 .9934 .9936
2.5 .9938 .9940 .9941 .9943 .9945 .9946 .9948 .9949 .9951 .9952
2.6 .9953 .9955 .9956 .9957 .9959 .9960 .9961 .9962 .9963 .9964
2.7 .9965 .9966 .9967 .9968 .9969 .9970 .9971 .9972 .9973 .9974
2.8 .9974 .9975 .9976 .9977 .9977 .9978 .9979 .9979 .9980 .9981
2.9 .9981 .9982 .9982 .9983 .9984 .9984 .9985 .9985 .9986 .9986
3.0 .9987 .9987 .9987 .9988 .9988 .9989 .9989 .9989 .9990 .9990
3.1 .9990 .9991 .9991 .9991 .9992 .9992 .9992 .9992 .9993 .9993
3.2 .9993 .9993 .9994 .9994 .9994 .9994 .9994 .9995 .9995 .9995
3.3 .9995 .9995 .9995 .9996 .9996 .9996 .9996 .9996 .9996 .9997
3.4 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9997 .9998
φ(x) = (1/2)[1 + erf(x/√2)]    (2.55)
The Q-function The Q-function is defined as

Q(x) = (1/√(2π)) ∫_{x}^{∞} e^{−y²/2} dy    (2.56)

This function is widely used in electrical engineering. Comparing with Eq. (2.53), we can observe that the Q-function in terms of the error function is given by

Q(x) = (1/2)[1 − erf(x/√2)]    (2.57)

The Q-function satisfies the relation

Q(−x) = 1 − Q(x)    (2.58)

from which we can get

F_X(x) = φ[(x − m_X)/σ_X] = 1 − Q[(x − m_X)/σ_X]    (2.59)
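Relation (2.57) gives a direct way to compute Q(x) from a standard-library error function. A short sketch using only Python's math module:

```python
from math import erfc, sqrt

def Q(x):
    # Q(x) = 1 - Phi(x) = 0.5 * erfc(x / sqrt(2)), from Eq. (2.57)
    return 0.5 * erfc(x / sqrt(2.0))

print(Q(2.0))                 # 0.02275 = 1 - 0.9772 (cf. Table 2.4)
print(Q(-2.0), 1 - Q(2.0))    # Eq. (2.58): Q(-x) = 1 - Q(x)
```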
Problem-Solving Procedure Consider a random variable X ~ N(50, 2). If we want to find the probability that X is less than or equal to 54, that is, P[X ≤ 54] = F(54), we first define a random variable

Z = (X − m_X)/σ_X

whose distribution is N(0, 1). Since we are interested in x = 54, we obtain

Z = (x − m_X)/σ_X = (54 − 50)/2 = 2

Now the probability that Z is less than or equal to 2 equals the probability that the original random variable X is less than or equal to 54:

F(54) = φ(2) = 0.9772
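The same computation with a library normal CDF, either by standardizing first or by passing the mean and standard deviation directly (a sketch; SciPy assumed):

```python
from scipy.stats import norm

print(norm.cdf(2.0))                    # Phi(2) = 0.9772
print(norm.cdf(54, loc=50, scale=2))    # P(X <= 54) for X ~ N(50, sigma = 2)
```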
REVIEW QUESTIONS

11. Explain in detail the normal distribution.
12. Mention the properties of the normal distribution.
13. Define the error function.
14. Define the Q-function.
15. Draw the pdf and CDF of the normal distribution.
Solved Problems
2.82 Suppose the current in a diode is assumed to follow a normal distribution with a mean of 10 mA and a variance of 4 mA². What is the probability that a measurement will exceed 12 mA?

Solution Let the diode current in milliamps be represented by a random variable X. We want P(X > 12). Consider the standardized variable Z = (X − m_X)/σ_X = (12 − 10)/2 = 1.

X > 12 corresponds to Z > 1. From Table 2.4,

P(X > 12) = P(Z > 1) = 1 − P[Z ≤ 1] = 1 − 0.8413 = 0.1587
2.83 Assume X is normally distributed with a mean of 10 and a standard deviation of 2. Determine the following: (a) P(X < 13) (b) P(X > 9) (c) P(6 < X < 14) (d) P(2 < X < 4).

Solution

Let Z = (X − m_X)/σ_X = (X − 10)/2

(a) P(X < 13) = P(Z < (13 − 10)/2) = P[Z < 1.5] = 0.9332

(b) P(X > 9) = P(Z > (9 − 10)/2) = P[Z > −0.5] = 1 − P(Z ≤ −0.5) = 1 − φ(−0.5)

    = 1 − [1 − φ(0.5)] = φ(0.5) = 0.6915

(c) P(6 < X < 14) = P((6 − 10)/2 < Z < (14 − 10)/2) = P(−2 < Z < 2)

    = φ(2) − φ(−2) = 2φ(2) − 1 = 2(0.9772) − 1 = 0.9544

(d) P(2 < X < 4) = P(−4 < Z < −3) = φ(−3) − φ(−4) = (1 − 0.99865) − (1 − 0.99997) ≈ 0.0013
2.85 An analog signal received at the detector (measured in mV) may be modelled as a Gaussian random variable N(200, 256) at a fixed point in time. What is the probability that the signal will exceed 240 mV? What is the probability that the signal is larger than 240 mV, given that it is larger than 210 mV?

Solution Given: X ~ N(200, 256), so σ_X = 16.

Let Z = (X − m_X)/σ_X = (X − 200)/16

P(X > 240) = P(Z > (240 − 200)/16) = P(Z > 2.5) = 1 − 0.9938 = 0.0062

P(X > 240 | X > 210) = P(X > 240)/P(X > 210), where P(X > 210) = P(Z > (210 − 200)/16) = P(Z > 0.625)

P(X > 240 | X > 210) = [1 − P(Z ≤ 2.5)]/[1 − P(Z ≤ 0.625)] = [1 − φ(2.5)]/[1 − φ(0.625)]    (from Table 2.4)

    = 0.0062/(1 − 0.734) = 0.0062/0.266 = 0.023
2.86 The annual rainfall (in hundreds of mm) of a certain region is normally distributed with m = 5 and σ = 2. What is the probability that, starting with this year, it will take over 10 years before a year occurs having a rainfall over 700 mm? What assumptions are you making?

Solution Given: m_X = 5, σ_X = 2 and x = 7

Z = (x − m_X)/σ_X = (7 − 5)/2 = 1

P[Z ≤ 1] = φ(1) = 0.8413

The probability that, starting from this year, it will take over 10 years before a year occurs having rainfall over 700 mm is (0.8413)¹⁰ = 0.178. The assumption is that the rainfall in each year is independent of the others.
2.87 A company conducts tests for job aspirants and passes them if they achieve a score of 500. If the test scores are normally distributed with a mean of 485 and a standard deviation of 30, what percentage of the aspirants pass the test?

Solution Given: x = 500, m_X = 485, σ_X = 30

P[X ≥ 500] = P[Z ≥ (500 − 485)/30] = P(Z ≥ 1/2) = 1 − P(Z < 1/2)

    = 1 − φ(1/2)    (from Table 2.4)

    = 1 − 0.6915 = 0.3085

That is, about 30.85% of the aspirants pass the test.
2.88 The output of a light bulb is normally distributed with a mean of 2000 foot-candles and a standard deviation of 60 foot-candles. Determine a lower specification limit such that only 4% of the manufactured bulbs will be defective.

Solution Given: m_X = 2000, σ_X = 60

Let Z = (X − 2000)/60. We require x such that

P(Z ≤ (x − 2000)/60) = φ((x − 2000)/60) = 0.04

⟹  φ((2000 − x)/60) = 0.96

(2000 − x)/60 = φ⁻¹(0.96) = 1.75    (from Table 2.4)

x = 2000 − 60 × 1.75 = 1895 foot-candles
2.89 In a test on 2000 electric bulbs, it was found that the life of a particular make was normally distributed with an average life of 2040 hours and a standard deviation of 60 hours. Estimate the number of bulbs likely to burn for (a) more than 2150 hours, (b) less than 1950 hours, and (c) more than 1920 hours but less than 2160 hours.

Solution We have Z = (X − m_X)/σ_X = (X − 2040)/60.

(a) P(X ≥ 2150) = P(Z ≥ (2150 − 2040)/60) = P(Z ≥ 1.83) = 1 − φ(1.83) = 1 − 0.9664 = 0.0336

The number of bulbs expected to burn more than 2150 hours is 0.0336 × 2000 ≈ 67.

(b) P(X ≤ 1950) = P(Z ≤ −1.5) = 1 − φ(1.5) = 1 − 0.9332 = 0.0668

The number of bulbs expected to burn less than 1950 hours is 0.0668 × 2000 ≈ 134.

(c) P(more than 1920 hours but less than 2160 hours):

P(1920 ≤ X ≤ 2160) = P(−2 ≤ Z ≤ 2) = φ(2) − φ(−2) = 2φ(2) − 1 = 2(0.9772) − 1 = 0.9544

The number of bulbs with lifetime between 1920 hours and 2160 hours is 0.9544 × 2000 ≈ 1909.
2.90 The average test mark in a class is 80, and the standard deviation is 6. If the marks are distributed normally, how many students in a class of 200 receive marks between 70 and 90?

Solution Let X represent the marks of the students.

Given: m_X = 80, σ_X = 6. Let x₁ = 70 and x₂ = 90:

Z₁ = (x₁ − m_X)/σ_X = (70 − 80)/6 = −1.67

Z₂ = (x₂ − m_X)/σ_X = (90 − 80)/6 = +1.67

P(Z₁ < Z ≤ Z₂) = P(−1.67 ≤ Z < 1.67) = φ(1.67) − φ(−1.67) = 2φ(1.67) − 1 = 2(0.9525) − 1 = 0.905

The number of students who receive marks between 70 and 90 is 200 × 0.905 = 181.
2.91 The average life of a certain type of electric bulb is 1200 hours. What percentage of this type of bulb is expected to fail in the first 800 hours of working? What percentage is expected to fail between 800 and 1000 hours? Assume a normal distribution with σ = 200 hours.

Solution Given: m_X = 1200, σ_X = 200

P(X ≤ 800) = P(Z ≤ (800 − 1200)/200) = P[Z ≤ −2]

    = φ(−2) = 1 − φ(2) = 1 − 0.9772 = 0.0228    (from Table 2.4)

That is, 2.28% of the bulbs are expected to fail in the first 800 hours.

P[800 ≤ X ≤ 1000] = P(−2 < Z ≤ −1) = φ(−1) − φ(−2) = [1 − φ(1)] − [1 − φ(2)]

    = φ(2) − φ(1) = 0.9772 − 0.8413 = 0.1359

That is, 13.59% of the bulbs are expected to fail between 800 and 1000 hours.
Practice Problems
2.36 The marks obtained by a number of students in mathematics is normally distributed with mean 65 and SD 5. Find
the probability that a student scores above 75. (Ans. 0.0228)
2.37 X is normally distributed and the mean of X is 12 and standard deviation is 4. Find the probability of the following:
(a) X ≥ 20 (b) 0 £ X £ 12 (c) Find x when P[X > x) = 0.24. (Ans. (a) 0.0228 (b) 0.49865 (c) 14.84)
2.38 The savings bank account of a customer showed an average balance of ₹150 and a standard deviation of ₹50. Assuming that the account balances are normally distributed, (2006)
(a) What percentage of accounts is over ₹200?
(b) What percentage of accounts is between ₹120 and ₹170?
(c) What percentage of accounts is less than ₹75?
(Ans. (a) 15.87% (b) 38.11% (c) 6.68%)
2.39 A person riding a two-wheeler travels on a highway with a mean speed of 60 km/h and a standard deviation of 4 km/h. What is the probability that he travels at a speed (a) between 55 km/h and 65 km/h, (b) more than 65 km/h?
(Ans. (a) 0.7888 (b) 0.1056)
2.40 The time required for a professor to evaluate 10 answer scripts follows a normal distribution with average time of
one hour and a standard deviation of 5 minutes.
(a) What is the probability that he will take less than 45 minutes?
(b) What is the probability that he will take more than 65 minutes?
(Ans.(a) 0.00135 (b) 0.15866)
Fig. 2.25 Illustrating how the pmf of the binomial distribution approaches the normal distribution as n increases
Solved Problem
2.92 A pair of dice is rolled 900 times. Let X denote the number of times a total of 9 occurs. Find P(90 ≤ X ≤ 110).
Solution Given: n = 900
The probability of a total of 9 when a pair of dice is rolled is p = 4/36 = 1/9, so q = 1 - p = 8/9.
Using the DeMoivre–Laplace limit theorem,
np = 900(1/9) = 100, npq = 900(1/9)(8/9) = 88.89, √(npq) = 9.428
P(90 ≤ X ≤ 110) = P((90 - 100)/9.428 ≤ Z ≤ (110 - 100)/9.428)
= Φ(10/9.428) - Φ(-10/9.428) = Φ(1.06) - Φ(-1.06)
= Φ(1.06) - [1 - Φ(1.06)] = 2Φ(1.06) - 1
= 2(0.8554) - 1 = 0.7108
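As a quick numerical cross-check, the sketch below compares the exact binomial probability with the normal approximation used above (SciPy assumed; a continuity correction is deliberately omitted to match the text).

```python
from scipy.stats import binom, norm

n, p = 900, 1 / 9
mu = n * p                          # 100
sigma = (n * p * (1 - p)) ** 0.5    # 9.428

# Exact binomial P(90 <= X <= 110)
exact = binom.cdf(110, n, p) - binom.cdf(89, n, p)
# Normal approximation without continuity correction, as in the text
approx = norm.cdf(110, mu, sigma) - norm.cdf(90, mu, sigma)
print(round(exact, 4), round(approx, 4))  # ~0.734 vs ~0.711
```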
fX(x) = λ e^{-λ(x - a)},  x ≥ a
      = 0,               x < a        (2.61)
For a = 0,
fX(x) = λ e^{-λx},  x ≥ 0
      = 0,          x < 0              (2.62)
The CDF is given by
FX(x) = P(X ≤ x) = ∫_{-∞}^{x} fX(u) du = ∫_{a}^{x} λ e^{-λ(u - a)} du
[Plot of fX(x) for (λ = 1, a = 1) and (λ = 0.5, a = 1) over 0 ≤ x ≤ 10]
Fig. 2.26 Probability density function of Exponential Distribution
= λ e^{λa} ∫_{a}^{x} e^{-λu} du = -e^{λa} e^{-λu} |_{a}^{x}
= e^{λa} [e^{-λa} - e^{-λx}]        (2.63)
= 1 - e^{-λ(x - a)}
For a = 0,
FX(x) = P(X ≤ x) = 1 - e^{-λx}        (2.64)
Now consider P(0), the probability of no occurrence in an interval of T seconds. This is given by
P(0) = e^{-b} = e^{-λT}.
Thus, the probability that there is at least one occurrence in an interval of T seconds is 1 – e–lT.
For an exponential random variable Y with parameter l, the probability that an event occurs not later than
time T is given by
P(Y £ T) = FY(T) = 1 – e–lT
Thus, an exponentially distributed Y with parameter λ describes the interval between occurrences of events defined by a Poisson random variable with parameter b.
Thus, the relationship between Poisson distribution and exponential distribution can be stated as follows:
If the number of occurrences has a Poisson distribution then the time between successive occurrences has
an exponential distribution.
Solved Problems
2.93 The time required to complete a work is an exponentially distributed random variable with λ = 1/2. What is the probability that the time to complete the work exceeds 2 hours?
Solution Given: λ = 1/2
Let X be a random variable that represents the time to complete the work.
fX(x) = λ e^{-λx} for x ≥ 0
      = 0 otherwise
Also FX(x) = 1 - e^{-λx}
P(X > 2) = 1 - P(X ≤ 2) = 1 - FX(2) = 1 - (1 - e^{-2λ}) = e^{-2λ}
Substituting the value of λ, we get P(X > 2) = e^{-1} = 0.3679
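A one-line numerical check of this tail probability (plain Python, no table needed):

```python
import math

lam = 0.5
# P(X > 2) = e^{-2*lam} for an exponential random variable
print(math.exp(-2 * lam))  # 0.3678..., i.e. e^{-1}
```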
2.94 The number of years a washing machine functions is exponentially distributed with λ = 1/10. Given that it has already worked for t years, what is the probability that it will be working after an additional 10 years?
Solution Let X denote the lifetime. By the memoryless property of the exponential distribution,
P[X > (t + 10) | X > t] = P[X > 10]
= 1 - P[X ≤ 10] = 1 - FX(10)
Since FX(x) = 1 - e^{-λx},
1 - FX(10) = 1 - (1 - e^{-10λ}) = e^{-10λ}
For λ = 1/10,
P[X > (t + 10) | X > t] = e^{-1}
2.95 If the number of kilometres that a car runs before its battery wears out is exponentially distributed with an average value of 10000 km, and if the owner desires to take a 5000 km trip, what is the probability that he will be able to complete his trip without having to replace the car battery? Assume that the car has been used for some time.
Solution Let X denote the random variable that represents the kilometres that a car runs before its battery wears out. X is exponentially distributed with an average value of 10000 km. That is,
1/λ = 10000
fX(x) = λ e^{-λx} = (1/10000) e^{-x/10000},  x ≥ 0
Because the exponential distribution is memoryless, the owner completes the trip without replacing the car battery if X > 5000.
Therefore, we have to find P(X > 5000) = ∫_{5000}^{∞} fX(x) dx:
P(X > 5000) = ∫_{5000}^{∞} (1/10000) e^{-x/10000} dx
Let x/10000 = t; then dx = 10000 dt. When x = 5000, t = 0.5.
P(X > 5000) = ∫_{0.5}^{∞} e^{-t} dt = -e^{-t} |_{0.5}^{∞} = e^{-0.5} = 0.6065
2.96 If X is exponentially distributed with parameter λ, find the value of k such that P(X > k)/P(X ≤ k) = a.
Solution
P(X > k) = ∫_{k}^{∞} λ e^{-λx} dx = -e^{-λx} |_{k}^{∞} = e^{-λk}
P(X ≤ k) = 1 - P(X > k) = 1 - e^{-λk}
Given: P(X > k)/P(X ≤ k) = e^{-λk}/(1 - e^{-λk}) = a
⇒ e^{-λk} = a - a e^{-λk}
e^{-λk} (1 + a) = a
e^{-λk} = a/(1 + a)
-λk = ln[a/(1 + a)]
λk = ln[(1 + a)/a]
k = (1/λ) ln[(1 + a)/a]
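The closed form can be sanity-checked by plugging k back into the ratio; a short sketch (the values of λ and a are chosen arbitrarily for illustration):

```python
import math

lam, a = 2.0, 0.25
k = (1 / lam) * math.log((1 + a) / a)  # k = (1/λ) ln((1+a)/a)

ratio = math.exp(-lam * k) / (1 - math.exp(-lam * k))  # P(X>k)/P(X<=k)
print(round(ratio, 6))  # recovers a = 0.25
```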
2.97 The mileage which car owners get with certain kinds of radial tyres is a random variable having an exponential distribution with a mean of 40000 km. Find the probability that one of these tyres will last (a) at least 20,000 km, and (b) at most 30,000 km.
Solution Let X be the random variable that represents the mileage obtained with the tyre.
Given: λ = 1/40000
We have fX(x) = λ e^{-λx} = (1/40000) e^{-x/40000},  x > 0
(a) P(at least 20000 km) = P(X > 20000) = 1 - P(X ≤ 20000)
P(X ≤ 20000) = ∫_{0}^{20000} (1/40000) e^{-x/40000} dx
Let x/40000 = t ⇒ dx = 40000 dt. When x = 20000, t = 0.5.
P(X ≤ 20000) = ∫_{0}^{0.5} e^{-t} dt = -e^{-t} |_{0}^{0.5} = 1 - e^{-0.5} = 0.3935
P(X > 20000) = 1 - 0.3935 = 0.6065
(b) P(at most 30,000 km) = P(X ≤ 30000)
P(X ≤ 30000) = ∫_{0}^{30000} (1/40000) e^{-x/40000} dx = ∫_{0}^{0.75} e^{-t} dt
= 1 - e^{-0.75} = 0.5276
2.98 Suresh has a car whose lifetime mileage (in thousand km) is exponentially distributed with λ = 1/20. He has driven the car for 10000 km and sold it to Ramesh. What is the probability that Ramesh would get at least 20000 km out of it? Repeat the problem if the lifetime mileage is uniformly distributed over (0, 40000) km. Assume X is a random variable that represents the mileage in thousand km.
Solution Since the exponential distribution has the memoryless property, we have
P[X > (t + s) | X > s] = P[X > t]
Given s = 10 and t = 20,
P[X > 30 | X > 10] = P[X > 20]
P[X > 20] = 1 - P[X ≤ 20] = 1 - (1 - e^{-20λ}) = e^{-(1/20)(20)} = e^{-1}
If X is a uniformly distributed random variable over (0, 40),
fX(x) = 1/40 for 0 ≤ x ≤ 40
      = 0 otherwise
P[X > 30 | X > 10] = [1 - FX(30)]/[1 - FX(10)]
FX(30) = ∫_{0}^{30} (1/40) dx = 3/4
FX(10) = ∫_{0}^{10} (1/40) dx = 1/4
P[X > 30 | X > 10] = (1 - 3/4)/(1 - 1/4) = (1/4)/(3/4) = 1/3
Note that the uniform lifetime is not memoryless: having survived 10 thousand km changes the conditional probability.
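A small sketch contrasting the two models numerically (plain Python; units of thousand km, as in the text):

```python
import math

# Exponential lifetime, mean 20 (thousand km): memoryless
lam = 1 / 20
p_exp = math.exp(-lam * 30) / math.exp(-lam * 10)  # P(X>30)/P(X>10)
print(round(p_exp, 4), round(math.exp(-1), 4))     # both e^{-1} = 0.3679

# Uniform lifetime on (0, 40): not memoryless
p_unif = (1 - 30 / 40) / (1 - 10 / 40)
print(p_unif)  # 1/3
```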
REVIEW QUESTIONS
16. Show that the inter-arrival times of a Poisson process with intensity l obeys an exponential
distribution.
Practice Problems
2.41 The lifetime of a machine part is exponentially distributed with a mean of 400 hours.
(a) What is the probability that the machine part fails in less than 100 hours?
(b) What is the probability that the machine part works for more than 500 hrs before failure?
(c) If the machine part worked for 400 hours without failure, what is the probability of a failure in the next 100
hours?
(Ans. (a) 0.2212 (b) 0.2865 (c) 0.2212)
2.42 The time between arrivals of suburban trains at Chennai Central railway station is exponentially distributed with a mean of 10 minutes.
(a) What is the probability that you wait longer than one hour for a train?
(b) Suppose you have already been waiting for one hour for a train; what is the probability that one arrives within the next 10 minutes?
(Ans. (a) 0.0025 (b) 0.6321)
2.6.6 Rayleigh Distribution
The Rayleigh density function is given by
fX(x) = (x/σ²) e^{-x²/2σ²},  x ≥ 0
      = 0,                   x < 0        (2.68)
The Rayleigh random variable is often a good model for measurement error and for the amplitude or envelope of a signal with two orthogonal components. It is used to represent the envelope of narrowband noise.
Solved Problem
2.99 The random variable X has the Rayleigh density with σ = 1, i.e., fX(x) = x e^{-x²/2} for x ≥ 0. (a) Show that fX(x) is a valid density function. (b) Find the distribution function FX(x).
Solution
(a) fX(x) ≥ 0 for all x, since fX(x) is defined only for positive values of x.
Consider ∫_{0}^{∞} x e^{-x²/2} dx. Let x²/2 = t, so that x dx = dt. Then
∫_{0}^{∞} x e^{-x²/2} dx = ∫_{0}^{∞} e^{-t} dt = -e^{-t} |_{0}^{∞} = 1
Hence fX(x) is a valid density function.
(b) FX(x) = ∫_{-∞}^{x} fX(u) du = ∫_{0}^{x} u e^{-u²/2} du
Let u²/2 = p, so that u du = dp. Then
FX(x) = ∫_{0}^{x²/2} e^{-p} dp = -e^{-p} |_{0}^{x²/2} = 1 - e^{-x²/2}
REVIEW QUESTIONS
19. Sketch the probability density function and probability distribution function of (a) exponential
distribution, (b) uniform distribution, and (c) Gaussian distribution
20. Define and explain the following density functions (a) Binomial (b) Exponential (c) Uniform
(d) Rayleigh.
21. Define Rayleigh density and distribution function and explain them with their plots.
= (a - 1) Γ(a - 1)        (2.71)
Therefore, we can write
Γ(n) = (n - 1) Γ(n - 1) = (n - 1)(n - 2) Γ(n - 2)
     = (n - 1)(n - 2)(n - 3) ... Γ(1)
Since Γ(1) = ∫_{0}^{∞} e^{-x} dx = 1, we get Γ(n) = (n - 1)!
Here b = 1/λ is known as the scale parameter (λ itself is the rate parameter).
The gamma pdf for different values of a and b is shown in Fig. 2.28.
Solved Problems
2.100 Show that the pdf of a gamma random variable integrates to one.
Solution The gamma pdf is fX(x) = λ(λx)^{a-1} e^{-λx}/Γ(a), x > 0.
∫_{0}^{∞} fX(x) dx = ∫_{0}^{∞} λ(λx)^{a-1} e^{-λx}/Γ(a) dx
Let λx = p, so λ dx = dp, i.e., dx = dp/λ.
= ∫_{0}^{∞} [λ p^{a-1} e^{-p}/Γ(a)] (dp/λ)
= (1/Γ(a)) ∫_{0}^{∞} p^{a-1} e^{-p} dp
= Γ(a)/Γ(a) = 1
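The same fact can be confirmed numerically for any particular pair (a, λ); a minimal sketch using SciPy's quadrature (the parameter values below are arbitrary):

```python
from math import gamma, exp
from scipy.integrate import quad

a, lam = 2.5, 0.3

def gamma_pdf(x):
    # f_X(x) = lam * (lam*x)^(a-1) * e^(-lam*x) / Gamma(a)
    return lam * (lam * x) ** (a - 1) * exp(-lam * x) / gamma(a)

total, _ = quad(gamma_pdf, 0, float("inf"))
print(round(total, 6))  # 1.0
```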
2.101 The daily consumption of water in an apartment, in excess of 10,000 litres, is approximately distributed as a gamma random variable with parameters a = 2 and λ = 1/5000. The apartment has a daily storage of 15,000 litres. What is the probability that the water is insufficient on a particular day?
Solution Let X denote the consumption of water in a day. Then Y = X - 10000 denotes the water consumed in excess of 10000 litres, which follows a gamma distribution with a = 2 and λ = 1/5000.
The pdf of Y is given by
fY(y) = (1/5000)² y e^{-y/5000}/Γ(2) = y e^{-y/5000}/(5000)²
Since the daily storage of water is 15000 litres, the water is insufficient on a particular day if the consumption exceeds 15000 litres. So the required probability is
P(X > 15000) = P(Y > 5000) = ∫_{5000}^{∞} fY(y) dy = ∫_{5000}^{∞} y e^{-y/5000}/(5000)² dy
Let y/5000 = t ⇒ dy = 5000 dt. When y = 5000, t = 1; when y = ∞, t = ∞.
P(Y > 5000) = ∫_{1}^{∞} (5000 t) e^{-t} (5000 dt)/(5000)² = ∫_{1}^{∞} t e^{-t} dt
= [-t e^{-t} - e^{-t}]_{1}^{∞} = e^{-1} + e^{-1} = 2e^{-1} = 0.7358
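The answer matches SciPy's gamma survival function; a minimal sketch (shape 2 and scale 5000 come from the problem):

```python
from scipy.stats import gamma

# Y ~ Gamma(shape=2, scale=5000); water is insufficient when Y > 5000
p = gamma(a=2, scale=5000).sf(5000)
print(round(p, 4))  # 0.7358 = 2/e
```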
2.102 In a post office, the time between customer arrivals is exponentially distributed with a mean of 5 minutes. What is the probability that the time between the arrival of the third customer and the arrival of the sixth customer is greater than 15 minutes?
Solution We use the gamma distribution fX(x) = λ(λx)^{a-1} e^{-λx}/Γ(a).
The rate at which customers visit the post office is one per 5 minutes; that is, λ = 1/5.
Relative to the third customer, the sixth customer is the third arrival. Therefore a = 3.
P(X > 15) = ∫_{15}^{∞} (1/5)(x/5)² e^{-x/5}/Γ(3) dx = (1/(5³ · 2!)) ∫_{15}^{∞} x² e^{-x/5} dx,  with Γ(3) = 2!
Let x/5 = t ⇒ dx = 5 dt. When x = 15, t = 3.
P(X > 15) = (1/(5³ · 2!)) ∫_{3}^{∞} (5t)² e^{-t} 5 dt = (1/2) ∫_{3}^{∞} t² e^{-t} dt
= (1/2) [(-t² e^{-t} - 2t e^{-t} - 2e^{-t})]_{3}^{∞}
= (1/2) [9 + 6 + 2] e^{-3} = 0.4232
2.103 The time interval between calls at a certain phone booth is exponentially distributed with a mean of 3 minutes. A arrived at the booth while B was using the phone, and observed that nobody was present in the queue. B had spent 2 minutes on the call before A arrived, and by the time B finished his conversation, more than 4 people were waiting behind A. What is the probability that the time between the instant A starts using the phone and the time the third person behind A starts his call is greater than 10 minutes?
Solution Let X be the sum of three inter-call intervals; each interval is exponentially distributed with λ = 1/3, and a = 3, so X is gamma distributed:
fX(x) = λ³ x² e^{-x/3}/Γ(3) = (1/3)³ x² e^{-x/3}/Γ(3),  0 < x < ∞
P(X > 10) = ∫_{10}^{∞} (1/3)³ x² e^{-x/3}/Γ(3) dx = (1/(3³ · 2)) ∫_{10}^{∞} x² e^{-x/3} dx
Let x/3 = t ⇒ dx = 3 dt. When x = 10, t = 10/3.
P(X > 10) = (1/2) ∫_{10/3}^{∞} t² e^{-t} dt = (1/2) [(-t² e^{-t} - 2t e^{-t} - 2e^{-t})]_{10/3}^{∞}
= (1/2) [(10/3)² + 2(10/3) + 2] e^{-10/3}
= 0.353
Practice Problem
2.43 In a service centre, a TV is repaired at an average rate of one TV every three days, the repair times being exponentially distributed. What is the probability that four TVs can be repaired within 5 days? (Ans. 0.265)
Solved Problems
2.104 The lifetime of a semiconductor diode is a random variable having a Weibull distribution with parameters a = 0.025 and b = 0.5. What is the probability that the diode will still be in operating condition after 4000 hours?
Solution Given: X is a Weibull random variable with
fX(x) = a b x^{b-1} e^{-a x^b},  x > 0, a > 0, b > 0
For a = 0.025 and b = 0.5,
fX(x) = 0.0125 x^{-1/2} e^{-0.025 x^{1/2}}
P(X > 4000) = ∫_{4000}^{∞} 0.0125 x^{-1/2} e^{-0.025 x^{1/2}} dx
Let 0.025 x^{1/2} = p; then (1/2)(0.025) x^{-1/2} dx = dp. When x = 4000, p = 0.025 √4000 = 1.58.
P(X > 4000) = ∫_{1.58}^{∞} e^{-p} dp = -e^{-p} |_{1.58}^{∞}
= e^{-1.58} = 0.2057
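For a Weibull variable the tail has the closed form P(X > x) = e^{-a x^b}, so the integral above can be bypassed; a quick check (plain Python):

```python
import math

a, b, x = 0.025, 0.5, 4000
# Weibull survival: P(X > x) = exp(-a * x**b)
print(round(math.exp(-a * x ** b), 4))  # ~0.206 (text: e^{-1.58} = 0.2057)
```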
2.105 An electronic circuit consists of 6 transistors whose life length (in years) can be considered a random variable that follows a Weibull distribution with parameters a = 20 and b = 1.5. If these transistors function independently of one another, what is the probability that no transistor will have to be replaced during the first 3 months of service?
Solution Given: a = 20, b = 1.5
fX(x) = a b x^{b-1} e^{-a x^b} = 30 x^{1/2} e^{-20 x^{1.5}},  x > 0
For a single transistor,
P(X > 1/4) = ∫_{1/4}^{∞} 30 x^{1/2} e^{-20 x^{1.5}} dx
Let 20 x^{1.5} = p; then 30 x^{1/2} dx = dp. When x = 1/4, p = 20(1/8) = 2.5.
P(X > 1/4) = ∫_{2.5}^{∞} e^{-p} dp = -e^{-p} |_{2.5}^{∞} = e^{-2.5} = 0.0821
Since the six transistors function independently, the probability that none needs replacement in the first 3 months is (e^{-2.5})^6 = e^{-15} ≈ 3.06 × 10^{-7}.
2.106 Find the probability of failure-free performance of a dc motor over a period of 5000 hours. The life expectancy of the motor is defined by a Weibull distribution with parameters a = 10^{-7} and b = 2.
Solution
The life expectancy of the motor is defined by a Weibull distribution with a = 10^{-7} and b = 2:
fX(x) = a b x^{b-1} e^{-a x^b} = 2 × 10^{-7} x e^{-10^{-7} x²}
P(failure-free performance of the dc motor) = 1 - P(X < 5000)
= 1 - ∫_{0}^{5000} 2 × 10^{-7} x e^{-10^{-7} x²} dx
Let 10^{-7} x² = t; then 2 × 10^{-7} x dx = dt. At x = 5000, t = 2.5.
P(failure-free performance) = 1 - ∫_{0}^{2.5} e^{-t} dt = 1 - (-e^{-t}) |_{0}^{2.5}
= 1 - (1 - e^{-2.5}) = e^{-2.5} = 0.08208
Practice Problems
2.44 If the life of a semiconductor is a random variable having a Weibull distribution with parameters a = 0.01 and b =
0.5, what is the probability that the semiconductor will be in operating condition after 5000 hours? (Ans. e–0.707)
2.45 The lifetime of a component measured in hours is Weibull distributed with parameters a = 10–3 and b = 2. Find the
probability that such a component will last more than 100 hours. (Ans. e–0.1)
REVIEW QUESTIONS
22. Write expression for pdf of
(a) Gamma random variable
(b) Beta random variable
(c) Cauchy random variable
23. Prove that G(n) = (n – 1)!
The log-normal random variable is a good model for problems involving random effects that are Gaussian when measured in a logarithmic scale. The probability distribution function of X is
F(x) = (1/2)[1 + erf(u)]        (2.82)
where u = (1/(√2 σ)) ln(x/x̄)        (2.83)
For the Laplace random variable, if the mean m = x̄ = 0 and a = 1/λ, the density function becomes
f(x) = (a/2) e^{-a|x|}        (2.85)
The Laplace density function for various values of λ is shown in Fig. 2.33.
Figure 2.34 shows the pdf of a Chi-square distributed random variable with different degrees of freedom.
Chi-square random variables are used in the Chi-square test, which is popular among many statistical tests. This test provides a measure to judge whether a random variable contradicts the underlying assumption made regarding its distribution.
f(r) = (r/σ²) e^{-(r² + l²)/2σ²} I₀(lr/σ²),  r ≥ 0        (2.88)
is known as the Rice density function.
The function I₀(x) is the zero-order modified Bessel function given by
I₀(x) = (1/2π) ∫_{0}^{2π} e^{x cos θ} dθ = Σ_{n=0}^{∞} x^{2n}/[2^{2n} (n!)²]        (2.89)
REVIEW QUESTIONS
28. Write the pdf of the following distributions:
(a) Log-normal
(b) Laplace
(c) Chi-square
29. For what type of applications are the following distributions used? (a) Log-normal (b) Chi-square.
CONDITIONAL DISTRIBUTION
AND DENSITY FUNCTION 2.7
In Chapter 1, we studied the concept of conditional probability. The conditional probability of the event A given the event B, with P(B) ≠ 0, is defined as
P(A | B) = P(A ∩ B)/P(B)        (2.91)
We will extend this concept to the distribution FX(x) and the density fX(x).
(iii) FX(x | A) = ∫_{-∞}^{x} fX(s | A) ds        (2.102)
(iv) P[x₁ < X ≤ x₂ | A] = ∫_{x₁}^{x₂} fX(x | A) dx        (2.103)
Let the conditioning event be A = {b₁ < X ≤ b₂}.
(a) If x ≥ b₂, then {X ≤ x, b₁ < X ≤ b₂} = {b₁ < X ≤ b₂}, so
FX(x | A) = P(b₁ < X ≤ b₂)/P(b₁ < X ≤ b₂) = 1,  x ≥ b₂
(b) If b₁ < x ≤ b₂, then {X ≤ x, b₁ < X ≤ b₂} = {b₁ < X ≤ x}, so
FX(x | A) = P(b₁ < X ≤ x)/P(b₁ < X ≤ b₂)
= [FX(x) - FX(b₁)]/[FX(b₂) - FX(b₁)],  b₁ ≤ x < b₂
(c) If x ≤ b₁, then {X ≤ x, b₁ < X ≤ b₂} = {∅}, so
FX(x | A) = P(∅)/P(b₁ < X ≤ b₂) = 0,  x < b₁
The corresponding density function is
fX(x | A) = fX(x)/[FX(b₂) - FX(b₁)]  for b₁ ≤ x < b₂
REVIEW QUESTIONS
30. Define conditional distribution and density function.
31. Explain the properties of conditional distribution.
32. Explain different methods of defining conditioning event.
Solved Problems
2.107 The random variable X is Poisson with parameter b. Find the conditional pmf of X given A = [X is even].
Solution The pmf of the Poisson random variable is
PX(x) = b^x e^{-b}/x!,  x = 0, 1, 2, ...
The CDF of X is given by FX(k) = Σ_{r=0}^{k} b^r e^{-b}/r!
The probability of A = [X is even] is
P(A) = P(X even) = Σ_{r even} b^r e^{-b}/r! = e^{-b} Σ_{r=0}^{∞} b^{2r}/(2r)!
= e^{-b} [1 + b²/2! + b⁴/4! + ...]
Using the series e^b = 1 + b/1! + b²/2! + ... and e^{-b} = 1 - b/1! + b²/2! - ..., we have (e^b + e^{-b})/2 = 1 + b²/2! + b⁴/4! + ..., so
P(A) = e^{-b} (e^b + e^{-b})/2 = (1 + e^{-2b})/2
If x is even, [X = x] ⊂ A and [X = x] ∩ A = [X = x]. If x is odd, [X = x] ∩ A = ∅. Hence
PX(x | A) = P(X = x)/P(A) = 2e^{-b} b^x/[(1 + e^{-2b}) x!],  x even
          = P(∅)/P(A) = 0,  x odd
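A numerical sanity check that the conditional pmf sums to one (plain Python; the value of b is arbitrary):

```python
import math

b = 2.0
p_even = (1 + math.exp(-2 * b)) / 2  # P(X even) for X ~ Poisson(b)

# Conditional pmf P(X = x | X even), summed over even x
total = sum(
    (b ** x * math.exp(-b) / math.factorial(x)) / p_even
    for x in range(0, 60, 2)
)
print(round(total, 6))  # 1.0
```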
2.108 A discrete random variable X has the probability mass function given below.
x       0   1   2    3    4    5     6
PX(x)   0   a   2a   2a   a²   3a²   2a²
Find (a) the value of a, (b) P[X < 5], P[X ≥ 4], P[0 < X < 4], and (c) the distribution function of X.
Solution
(a) We know Σ_x PX(x) = 1
⇒ 0 + a + 2a + 2a + a² + 3a² + 2a² = 1
6a² + 5a - 1 = 0 ⇒ 6a² + 6a - a - 1 = 0
6a(a + 1) - 1(a + 1) = 0
(6a - 1)(a + 1) = 0
⇒ a = 1/6
(b) P[X < 5] = P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4)
= 5a + a² = 5/6 + (1/6)² = 5/6 + 1/36 = (30 + 1)/36 = 31/36
P[X ≥ 4] = P(X = 4) + P(X = 5) + P(X = 6) = a² + 3a² + 2a² = 6a² = 6(1/36) = 1/6
P(0 < X < 4) = P(X = 1) + P(X = 2) + P(X = 3) = a + 2a + 2a = 5a = 5/6
(c) Distribution function of X:
x       0   1        2         3         4                 5                  6
FX(x)   0   a = 1/6  3a = 1/2  5a = 5/6  a² + 5a = 31/36   4a² + 5a = 34/36   6a² + 5a = 1
2.109 The daily consumption of milk in a city, in excess of 20,000 gallons, is approximately exponentially distributed with θ = 3000. The city has a daily stock of 35,000 gallons. What is the probability that, of two days selected at random, the stock is insufficient on both days?
Solution Let X denote the daily consumption of milk and Y the excess amount consumed in a day. Then we can write Y = X - 20000.
The random variable Y follows an exponential distribution with θ = 3000. Therefore,
fY(y) = (1/3000) e^{-y/3000}
When X > 35000, the stock is insufficient. Therefore, the probability that the stock is insufficient on one day is
P[X > 35000] = P[Y > 15000] = -e^{-y/3000} |_{15000}^{∞} = e^{-5}
The probability that, of two days selected at random, the stock is insufficient on both days is
e^{-5} (e^{-5}) = e^{-10}
2.110 The density function of a mixed random variable X is given by
fX(x) = k {e^{x} [u(-x - 1) - u(-x - 4)] + e^{-x} [u(x - 1) - u(x - 4)]}
        + (1/4) [δ(x + 2) + δ(x) + δ(x - 2)]
Find k and FX(x).
Solution The probability density function fX(x) is sketched in Fig. 2.37.
[Fig. 2.37: fX(x) with exponential lobes on 1 ≤ |x| ≤ 4 and impulses of weight 1/4 at x = -2, 0, 2]
The pdf is defined over the interval [-4, 4]. Therefore, to evaluate k, we integrate fX(x) over this interval:
∫_{-4}^{4} fX(x) dx = k {∫_{-4}^{-1} e^{x} dx + ∫_{1}^{4} e^{-x} dx} + (1/4) ∫ δ(x + 2) dx + (1/4) ∫ δ(x) dx + (1/4) ∫ δ(x - 2) dx
= k {e^{-1} - e^{-4} + e^{-1} - e^{-4}} + 1/4 + 1/4 + 1/4
= 2k [e^{-1} - e^{-4}] + 3/4
We know ∫_{-∞}^{∞} fX(x) dx = 1
⇒ 2k [e^{-1} - e^{-4}] + 3/4 = 1
⇒ k = 1/(8[e^{-1} - e^{-4}]) = 0.3576
The CDF is built up interval by interval. For x < -4, FX(x) = 0.
For -4 ≤ x < -2,
FX(x) = 0.3576 ∫_{-4}^{x} e^{u} du = 0.3576 (e^{x} - e^{-4})
For -2 ≤ x ≤ -1 (the impulse at x = -2 adds 1/4),
FX(x) = 0.3576 (e^{x} - e^{-4}) + 1/4
For -1 < x < 0,
FX(x) = 0.3576 (e^{-1} - e^{-4}) + 1/4 = 1/8 + 1/4 = 3/8
For 0 ≤ x ≤ 1 (the impulse at x = 0 adds 1/4),
FX(x) = 3/8 + 1/4 = 5/8
For 1 < x ≤ 2,
FX(x) = 5/8 + 0.3576 ∫_{1}^{x} e^{-u} du = 5/8 + 0.3576 (e^{-1} - e^{-x})
For 2 < x ≤ 4 (the impulse at x = 2 adds 1/4),
FX(x) = 5/8 + 0.3576 (e^{-1} - e^{-x}) + 1/4
For x > 4,
FX(x) = 5/8 + 0.3576 (e^{-1} - e^{-4}) + 1/4 = 5/8 + 1/8 + 1/4 = 1
2.111 A purse contains nine ` 10 notes and one ` 100 note. Let X be the random variable that represents the total amount that results when two notes are drawn from the purse without replacement.
(a) Describe the underlying sample space S.
(b) Find the probability for the various values of X.
Solution Since the second draw is performed without replacement, 9 notes remain before the second draw, so there are 10 × 9 equally likely ordered outcomes. The sample space consists of the ordered pairs (first note, second note), where the same physical note cannot be drawn twice. The total X takes only two values:
X = 20 (both notes are ` 10) and X = 110 (one ` 10 note and the ` 100 note, in either order).
P(X = 20) = 9(8)/(10 × 9) = 72/90 = 4/5
P(X = 110) = 2 × 9/(10 × 9) = 1/5
Solved Problems
2.112 Two balls are randomly chosen from an urn containing 10 white, 3 black and 2 green balls. Suppose that we win ` 10 for each black ball selected and we lose ` 5 for each white ball selected. Let X denote our winnings. Find the probability for the different possible values of X and plot the CDF.
Solution There are C(15, 2) = 105 equally likely pairs.
Case 1: Both balls are black; we win ` 20.
P(X = 20) = C(3, 2)/C(15, 2) = 3/105
Case 2: One white and one black ball; we win ` 10 - ` 5 = ` 5.
P(X = 5) = C(10, 1) C(3, 1)/C(15, 2) = 30/105
Case 3: One black and one green ball; we win ` 10.
P(X = 10) = C(3, 1) C(2, 1)/C(15, 2) = 6/105
Similarly, P(X = -10) = C(10, 2)/C(15, 2) = 45/105 (both white), P(X = -5) = C(10, 1) C(2, 1)/C(15, 2) = 20/105 (one white, one green), and P(X = 0) = C(2, 2)/C(15, 2) = 1/105 (both green).
x       -10      -5       0      5        10      20
PX(x)   45/105   20/105   1/105  30/105   6/105   3/105
Fig. 2.38 (a) pmf and (b) CDF of X; the CDF steps through 0.43, 0.62, 0.91 and reaches 1.0 at x = 20
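These probabilities are easy to verify by brute-force enumeration of all C(15, 2) pairs; a short sketch (plain Python, using itertools):

```python
from itertools import combinations
from collections import Counter

balls = ["W"] * 10 + ["B"] * 3 + ["G"] * 2
value = {"W": -5, "B": 10, "G": 0}  # winnings per ball

wins = Counter(value[a] + value[b] for a, b in combinations(balls, 2))
total = sum(wins.values())          # C(15, 2) = 105
for x in sorted(wins):
    print(x, f"{wins[x]}/{total}")  # matches the table above
```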
⇒ C = 1/(4(1 - e^{-0.25})) = 1.13
∴ fX(x) = 1.13 e^{-x/4},  0 ≤ x < 1
        = 0,  otherwise
FX(0.5) = ∫_{0}^{0.5} 1.13 e^{-x/4} dx = -1.13(4) e^{-x/4} |_{0}^{0.5} = 4.52 (1 - e^{-0.125}) = 0.5311
Solution
P(2.5 < Y ≤ 6.2) = FY(6.2) - FY(2.5)
= (1 - e^{-0.4(6.2)}) - (1 - e^{-0.4(2.5)})
= e^{-0.4(2.5)} - e^{-0.4(6.2)}
Find the probability: (a) P(-∞ < X ≤ 6.5) (b) P(X > 4) (c) P(6 < X ≤ 9)
Solution The random variable X takes the values n = 1, 2, ..., 12 with P(X = n) = n²/650 (note that Σ_{n=1}^{12} n² = 650), so FX(k) = (1/650) Σ_{n=1}^{k} n² = (1/650) k(k + 1)(2k + 1)/6.
(a) P(-∞ < X ≤ 6.5) = FX(6) = (1/650) · 6(7)(13)/6 = 91/650 = 0.14
(b) P(X > 4) = 1 - FX(4) = 1 - (1/650) · 4(5)(9)/6 = 1 - 30/650 = 0.954
(c) P(6 < X ≤ 9) = FX(9) - FX(6) = (1/650)[9(10)(19)/6 - 6(7)(13)/6]
= (1/650)[285 - 91] = 194/650 = 0.2985
2.116 The diameter of an electric cable is assumed to be a continuous random variable X with pdf
fX(x) = 6x(1 - x),  0 ≤ x ≤ 1
(a) Verify that fX(x) is a valid density function.
(b) Find P(0 ≤ X ≤ 0.5).
Solution The pdf of the random variable X is
fX(x) = 6x(1 - x),  0 ≤ x ≤ 1
(a) For a valid pdf, fX(x) ≥ 0 and ∫_{-∞}^{∞} fX(x) dx = 1. Here fX(x) ≥ 0 on [0, 1], and
∫_{0}^{1} 6x(1 - x) dx = 6 [x²/2 - x³/3]_{0}^{1} = 6 [1/2 - 1/3] = 1
Hence fX(x) is a valid density function.
(b) P(0 ≤ X ≤ 0.5) = 6 ∫_{0}^{0.5} (x - x²) dx = 6 [x²/2 - x³/3]_{0}^{0.5}
= 6 [(0.5)²/2 - (0.5)³/3] = 0.5
∴ P(0 ≤ X ≤ 0.5) = 0.5
2.117 The continuous random variable X has pdf fX(x) = x/2, 0 ≤ x ≤ 2. Two independent determinations of X are made. What is the probability that both of these determinations will be greater than one? If three determinations had been made, what is the probability that exactly two of them are larger than one?
Solution Given fX(x) = x/2, 0 ≤ x ≤ 2.
Let X1 and X2 be the two independent determinations.
P(X1 > 1) = ∫_{1}^{2} (x/2) dx = (1/2)(x²/2) |_{1}^{2} = 3/4
Similarly, P(X2 > 1) = 3/4.
P(X1 > 1, X2 > 1) = (3/4)(3/4) = 9/16
Let X1, X2 and X3 be three independent determinations. We can find that
P(X < 1) = ∫_{0}^{1} (x/2) dx = (x²/4) |_{0}^{1} = 1/4
The probability that exactly two of the three determinations are larger than one is
C(3, 2)(3/4)²(1/4) = 9/64 + 9/64 + 9/64 = 27/64
2.118 The continuous random variable X has pdf fX(x) = 3x², -1 ≤ x ≤ 0. If b is a number satisfying -1 < b < 0, compute P(X > b | X < b/2).
Solution
FX(x) = ∫_{-∞}^{x} fX(u) du = ∫_{-1}^{x} 3u² du = u³ |_{-1}^{x} = x³ + 1
P(X > b | X < b/2) = P[X > b, X < b/2]/P[X < b/2]
Since b < b/2 < 0 for -1 < b < 0,
= [FX(b/2) - FX(b)]/FX(b/2)
= [(b/2)³ + 1 - b³ - 1]/[(b/2)³ + 1]
= (-7b³/8)/[(b³ + 8)/8]
= -7b³/(b³ + 8)
(a) ∫_{-∞}^{∞} (1/π)[1/(1 + x²)] dx = (1/π) tan⁻¹x |_{-∞}^{∞} = (1/π)[π/2 - (-π/2)] = 1
So fX(x) = 1/[π(1 + x²)] is a valid pdf.
(b) fX(x) = (1/4)(x - 1)³ if 1 ≤ x ≤ 3
∫_{-∞}^{∞} fX(x) dx = (1/4) ∫_{1}^{3} (x - 1)³ dx = (1/4)(x - 1)⁴/4 |_{1}^{3} = 2⁴/16 = 1
So fX(x) is a valid pdf.
2.120 A random variable X has pdf fX(x) = a²x e^{-ax} u(x), where a is a constant. Find the CDF and evaluate P(X ≤ 1/a) and P(1/a ≤ X ≤ 2/a).
Solution
FX(x) = ∫_{0}^{x} a²u e^{-au} du = a² [-x e^{-ax}/a - e^{-ax}/a² + 1/a²]
= -ax e^{-ax} - e^{-ax} + 1
= 1 - (ax + 1) e^{-ax},  x ≥ 0
P(X ≤ 1/a) = FX(1/a) = [1 - (ax + 1) e^{-ax}]_{x = 1/a}
= 1 - (1 + 1)e^{-1} = 1 - 2e^{-1} = 0.264
P(1/a ≤ X ≤ 2/a) = F(2/a) - F(1/a)
F(2/a) = [1 - (ax + 1) e^{-ax}]_{x = 2/a} = 1 - 3e^{-2}
⇒ P(1/a ≤ X ≤ 2/a) = (1 - 3e^{-2}) - (1 - 2e^{-1})
= 2e^{-1} - 3e^{-2} = 0.330
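A symbolic check of this CDF derivation (SymPy assumed; a is kept symbolic and positive):

```python
import sympy as sp

a, x, u = sp.symbols("a x u", positive=True)

# F_X(x) = integral of a^2 * u * exp(-a*u) from 0 to x
F = sp.integrate(a**2 * u * sp.exp(-a * u), (u, 0, x))
print(sp.simplify(F))               # equals 1 - (a*x + 1)*exp(-a*x)
print(sp.simplify(F.subs(x, 1/a)))  # 1 - 2*exp(-1)
```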
2.121 For a random variable with binomial distribution, show that for 0 ≤ x < n,
PX(x + 1)/PX(x) = (n - x)p/[(x + 1)(1 - p)]
Using the above result, show that
(a) PX(x + 1) > PX(x) if x < np - (1 - p)
(b) PX(x + 1) = PX(x) if x = np - (1 - p)
(c) PX(x + 1) < PX(x) if x > np - (1 - p)
Solution
We know PX(x) = C(n, x) p^x (1 - p)^{n - x} and PX(x + 1) = C(n, x + 1) p^{x + 1} (1 - p)^{n - x - 1}
PX(x + 1)/PX(x) = [n!/((x + 1)!(n - x - 1)!)] · [x!(n - x)!/n!] · p/(1 - p)
= [(n - x)/(x + 1)] · [p/(1 - p)]
(a) Given PX(x + 1) > PX(x) ⇒ PX(x + 1)/PX(x) > 1
Hence [(n - x)/(x + 1)][p/(1 - p)] > 1, i.e., (n - x)p > (x + 1)(1 - p)
np - xp > x + 1 - xp - p
np > x + 1 - p
np - 1 + p > x
⇒ x < np - (1 - p)
(b) Given PX(x + 1) = PX(x) ⇒ PX(x + 1)/PX(x) = 1
⇒ (n - x)p = (x + 1)(1 - p) ⇒ x = np - (1 - p)
(c) Given PX(x + 1) < PX(x) ⇒ PX(x + 1)/PX(x) < 1
⇒ x > np - (1 - p)
2.122 The probability that an item is defective is 0.05. If 20 items are chosen at random, find the probability that more than one is defective, (a) using the binomial distribution, and (b) using the Poisson approximation.
Solution Here n = 20 and p = 0.05.
(a) P(X > 1) = 1 - P(X = 0) - P(X = 1)
= 1 - C(20, 0)(0.05)⁰(0.95)²⁰ - C(20, 1)(0.05)¹(0.95)¹⁹
= 1 - 0.3585 - 0.3774 = 0.264
(b) λ = np = 20(0.05) = 1
P(X > 1) = 1 - P(X = 0) - P(X = 1)
= 1 - e^{-1}(1)⁰/0! - e^{-1}(1)¹/1!
= 1 - 2e^{-1} = 0.264
2.123 An urn contains 3 white and 3 black balls. We randomly choose 2 balls. If one of them is white and the other is black, we stop. If not, we replace the balls in the urn and again randomly select 2 balls. This continues until we pick one white and one black ball. What is the probability that we shall make exactly n selections?
Solution One white ball can be selected from 3 white balls in C(3, 1) ways.
Similarly, one black ball can be selected from 3 black balls in C(3, 1) ways.
P(selecting 1 black and 1 white ball) = C(3, 1) C(3, 1)/C(6, 2) = (3 × 3)/15 = 3/5
The number of selections is geometrically distributed with p = 3/5, so
P(exactly n selections) = (2/5)^{n - 1}(3/5),  n = 1, 2, ...
2.124 A salesman is paid ` 100 for each sale he makes. The probability that he sells the product in a sales call is 0.75.
(a) What is the probability that he earned his third ` 100 on the fifth call he made?
(b) If he makes five calls per hour, what is the probability that he earned ` 900 in two hours?
Solution The probability that the salesman sells the product is p = 0.75. Let X denote the number of calls up to and including the kth success. The pmf of X is the negative binomial (Pascal) pmf
pX(x) = C(x - 1, k - 1) p^k (1 - p)^{x - k} = C(x - 1, k - 1)(0.75)^k (0.25)^{x - k},  x = k, k + 1, ...
(a) The probability that the salesman earned his third ` 100 on the fifth call is
PX(X = 5) = C(5 - 1, 3 - 1)(0.75)³(0.25)² = 0.1582
(b) If he makes five calls per hour, the total number of calls in 2 hours is 10. He earns ` 900 if he makes 9 sales in those 10 calls, which is a binomial probability of 9 successes in 10 trials:
p = C(10, 9)(0.75)⁹(0.25)¹ = 0.1877
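Both parts can be checked with SciPy; note that scipy.stats.nbinom counts failures before the kth success, so the fifth call corresponds to 2 failures (a minimal sketch):

```python
from scipy.stats import nbinom, binom

p, k = 0.75, 3
# (a) P(third success on the fifth call) = P(2 failures before 3rd success)
print(round(nbinom.pmf(2, k, p), 4))  # 0.1582

# (b) P(9 successes in 10 calls)
print(round(binom.pmf(9, 10, p), 4))  # 0.1877
```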
2.125 A company invited applications to fill three vacancies. Out of 20 applications received, five applicants were found to be qualified for the post. It is observed from past experience that 25% of applicants who are offered this kind of post reject the offer. If the company ranks the selected five applicants according to their performance, what is the probability that the fifth-ranked applicant will be offered one of the positions?
Solution The probability that an applicant offered a job actually accepts is 0.75.
Let X be a random variable that denotes the number of candidates offered a job up to and including the kth candidate to accept the job. Then the pmf is
pX(n) = C(n - 1, k - 1)(0.75)^k (0.25)^{n - k},  n = k, k + 1, ...
The fifth-ranked applicant will be offered one of the three positions if the fifth candidate is either the first, second or third candidate to accept a job. Therefore, the required probability is
P = C(4, 0)(0.75)(0.25)⁴ + C(4, 1)(0.75)²(0.25)³ + C(4, 2)(0.75)³(0.25)² = 0.1963
2.126 A multiple-choice examination has 10 problems, each of which has four possible answers. What is the probability that Hari will get five or more correct answers by just guessing?
Solution Since each multiple-choice question has four possible answers, the probability of getting a correct answer by just guessing is p = 1/4.
Let X be a random variable that denotes the number of questions Hari answers correctly out of 10.
P(X ≥ 5) = P(X = 5) + P(X = 6) + P(X = 7) + P(X = 8) + P(X = 9) + P(X = 10)
= C(10, 5)(1/4)⁵(3/4)⁵ + C(10, 6)(1/4)⁶(3/4)⁴ + C(10, 7)(1/4)⁷(3/4)³
  + C(10, 8)(1/4)⁸(3/4)² + C(10, 9)(1/4)⁹(3/4)¹ + C(10, 10)(1/4)¹⁰
= 0.0584 + 0.0162 + 0.00309 + 3.86 × 10⁻⁴ + 2.86 × 10⁻⁵ + 9.536 × 10⁻⁷
= 0.0781
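The tail sum equals the binomial survival function; a one-liner to confirm it (SciPy assumed):

```python
from scipy.stats import binom

# P(X >= 5) = P(X > 4) for X ~ Binomial(10, 1/4)
print(round(binom.sf(4, 10, 0.25), 4))  # 0.0781
```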
2.127 In a state, studies indicate that 20% of marriages end in divorce. If divorces between couples are independent of each other, find the probability
(a) that exactly 4 of 12 couples will stay married
(b) that only Ram and Geetha (among the 12 couples) will stay married
Solution
(a) The probability that a marriage ends in divorce is 0.2, so the probability that a couple stays married is 0.8. Let X denote the number of couples that stay married. The probability that exactly 4 out of 12 couples stay married is
P(X = 4) = C(12, 4)(0.8)⁴(0.2)⁸ = 0.00052
(b) The probability that only Ram and Geetha stay married while the remaining eleven couples get divorced is
P = (0.8)(0.2)¹¹ = 1.64 × 10⁻⁸
2.128 An urn contains B black balls and W white balls. A ball is randomly selected from the urn, its colour is noted, and the ball is put back into the urn. The process is repeated until a black ball is selected.
(a) What is the probability that the experiment stops after exactly n trials?
(b) What is the probability that the experiment requires at least k trials before it stops?
Solution Let X be a random variable that denotes the number of trials until a black ball is selected. With p = B/(B + W) and q = W/(B + W), the pmf of X is
pX(n) = p q^{n - 1},  n = 1, 2, ...
(a) The probability that the experiment stops after exactly n trials is
P(X = n) = [B/(B + W)] [W/(B + W)]^{n - 1},  n = 1, 2, ...
(b) The probability that the experiment requires at least k trials is
P(X ≥ k) = 1 - P(X < k) = 1 - Σ_{n=1}^{k-1} p q^{n - 1}
= 1 - p{1 + q + ... + q^{k - 2}}
= 1 - p [1 - q^{k - 1}]/(1 - q)
= q^{k - 1} = [W/(B + W)]^{k - 1}
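A quick numerical check of the geometric tail formula (plain Python; B and W are chosen arbitrarily):

```python
B, W = 3, 7
p = B / (B + W)
q = W / (B + W)

k = 4
# P(X >= k) by direct summation vs the closed form q^(k-1)
tail = sum(p * q ** (n - 1) for n in range(k, 400))
print(round(tail, 6), round(q ** (k - 1), 6))  # both ~0.343
```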
REVIEW QUESTIONS
33. Define random variable and explain the concept of random variable.
34. What are the conditions for a function to be a random variable?
35. Define discrete and continuous random variables with suitable examples.
36. Define distribution function of a random variable.
37. Define probability mass function.
38. Explain the properties of distribution function.
39. What are the important conditions that a pdf should satisfy?
40. Define pdf of continuous random variable X.
41. What are the values of FX(–•) and FX(•)?
42. List and explain the properties of discrete probability density function.
43. Explain the relation between CDF and pdf.
44. If X is a continuous random variable prove that P(X = C) = 0.
45. What are binomial density and distribution functions?
46. What is Poisson random variable? Explain in brief.
47. When is a random variable X said to have a uniform distribution?
48. What is a Gaussian random variable? Explain its significance.
49. Draw the pdf and CDF curves of a Gaussian random variable.
50. Derive an expression for the error function of the standard normal random variable.
EXERCISES
Problems
1. If X is a discrete random variable having the probability distribution
X=x –2 –1 0 1 2
P(X = x) k k/2 2k k k/2
Find P(X £ 1)
2. If X is a discrete random variable having the pmf
X 1 2 3 4
p(x) k 2k 3k/2 k/2
Find P(2 £ X < 3).
3. After a coin is tossed two times, if X is the number of heads, find the probability distribution of X.
4. If P(X = x) = kx, x = 1, 2, 3, 4, 5
             = 0, otherwise
represents the pmf of a random variable X, find
(i) k (ii) P(X being a prime number) (iii) P(1/2 < X < 5 | X > 1)
(Ans: (i) 1/15 (ii) 11/15 (iii) 1/7)
6. Consider the experiment of tossing four fair coins. The random variable X is associated with the
number of tails showing. Compute and sketch CDF of X.
7. The CDF of a random variable X is given by
FX(x) = 0,  x ≤ 0
      = C(1 - e^{-x}),  x > 0
Find fX(x).  (Ans: fX(x) = 0 for x ≤ 0; C e^{-x} for x > 0)
8. Determine whether the following is a valid distribution function:
FX(x) = 1 - e^{-x} for x ≥ 0
      = 0 elsewhere
9. Consider the pdf of a random variable X given by fX(x) = a e^{-b|x|}, -∞ < x < ∞. Find the CDF of X.
(Ans: FX(x) = (a/b) e^{bx} for x ≤ 0; (a/b)(2 - e^{-bx}) for x ≥ 0)
16. The random variable X has a binomial distribution with n = 5 and p = 0.6.
Find (i) P(X = 2) (ii) P(X £ 2) (iii) P(1 £ X £ 3).
17. In an electronics laboratory it is found that 5% of voltmeters are defective. A random sample of 10 voltmeters is taken for inspection. (i) What is the probability that all are good? (ii) What is the probability that there are at most two defective voltmeters?
18. In a country the suicide rate is 2 per one lakh people per month. Find the probability that in a city of
population 10,00,000 there will be at most two suicides in a month.
19. The number of errors in a textbook follow a Poisson distribution with a mean of 0.05 error per page.
What is the probability that there are five or less errors in 50 pages?
20. The number of telephone calls that arrive at a telephone exchange is modelled as a Poisson random
variable with 10 average calls per hour.
(a) What is the probability that there are three or less calls in one hour?
(b) What is the probability that there are exactly 10 calls in one hour?
21. If the probability that a student passes the examination equals 0.6, what is the probability that he needs more than 3 attempts before he passes the examination?
22. A point is chosen at random on the line segment [0, 8]. What is the probability that the chosen point lies between 5 and 6?
23. A typist types 3 letters wrongly for every 50 letters. What is the probability that the fifth letter typed is the first erroneous letter?
24. In an examination taken by 500 candidates, the mean and standard deviation of marks are found to be 35% and 10% respectively. If the marks scored are normally distributed, find how many will pass if 30% is fixed as the minimum.
25. X is normally distributed with mean 10 and standard deviation 5. Find
(i) P(X ≥ 15) (ii) P(1 £ X £ 12).
26. In a cinema hall, the time required for a person to give tickets follows a normal distribution. He gives
ten tickets with an average time of 5 minutes and a standard deviation of 1 minute.
(i) What is the probability that he will take less than 3 minutes to issue 10 tickets?
(ii) What is the probability he will take more than 6 minutes?
27. The number of years a TV functions is exponentially distributed with λ = 1/8. What is the probability that it will be working after an additional 8 years?
Multiple-Choice Questions
1. Which of the following statement(s) are false?
(a) FX(x) is monotonically increasing on the real line
(b) FX(x) is continuous from the right everywhere
(c) FX(x) = ∫_{-∞}^{∞} fX(x) dx
(d) P(a ≤ X ≤ b) = ∫_{a}^{b} fX(x) dx
2. If X is a discrete random variable with geometric distribution, then P(X = x) for x = 1, 2, ..., ∞ is
(a) q^x (b) p q^{x - 1} (c) (1 - q)^{x - 1} (d) 1/q^x
[pmf shown as impulses of weights 1/8, 3/8, 3/8, 1/8 at x = 0, 1, 2, 3]
What is its CDF?
(a) (1/8)δ(x) + (3/8)δ(x - 1) + (3/8)δ(x - 2) + (1/8)δ(x - 3)
(b) (1/8)δ(x) + (3/8)δ(x + 1) + (3/8)δ(x + 2) + (1/8)δ(x + 3)
(c) (1/8)u(x) + (3/8)u(x + 1) + (3/8)u(x + 2) + (1/8)u(x + 3)
(d) (1/8)u(x) + (3/8)u(x - 1) + (3/8)u(x - 2) + (1/8)u(x - 3)
11. A random variable X has a pdf
fX(x) = Ce–a|x|
P(X < b) =
(a) 1 – e–ab (b) 1 – e–a (c) 1 – e–a/b (d) 1 – e–b/a
The probability that the system will not fail within two weeks is
(a) 0.5025 (b) 0.6065 (c) 0.4035 (d) 0.3935
34. A one-ohm resistor with 10% tolerance has pdf
fR(r) = 5,  0.9 < r < 1.1
      = 0,  otherwise
P(0.95 < R < 1.05) =
(a) 0.25 (b) 0.5 (c) 0.75 (d) 0.4
INTRODUCTION 3.1
In Chapter 2, we studied the concept of a random variable and its characterization using distribution and density functions. We also studied different types of distributions and discussed the way some real, physical-world random phenomena can be modelled using these distributions. In this chapter, we will study one of the operations that can be performed on a random variable. This operation, known as expectation, is used to find the characteristics of a random variable.
EXPECTATION 3.2
Consider a variable X that takes values x1, x2, ..., xp with frequencies n1, n2, ..., np. The weighted average of X is
X̄ = (n1x1 + n2x2 + ... + npxp)/(n1 + n2 + ... + np)        (3.1)
= (n1x1 + n2x2 + ... + npxp)/N
= (n1/N) x1 + (n2/N) x2 + ... + (np/N) xp
= Σᵢ (nᵢ/N) xᵢ        (3.2)
If X is random, then the term (nᵢ/N) in the above equation is equal to the probability of X taking the value xᵢ, denoted P(X = xᵢ) or pX(xᵢ), and X̄ is known as the statistical average of X. Now we can write
X̄ = Σᵢ pX(xᵢ) xᵢ        (3.3)
The term X̄ is a number used to locate the centre of the distribution of a random variable. The term expectation is used for the process of averaging a random variable.
Solved Problems
3.1 If X is the outcome when we roll a fair dice, find the expectation of X.
Solution For a fair dice, the sample space is S = {1, 2, ..., 6}, each outcome with equal probability 1/6. Therefore,
E[X] = mX = Σᵢ xᵢ pX(xᵢ)
= 1(1/6) + 2(1/6) + 3(1/6) + 4(1/6) + 5(1/6) + 6(1/6)
= 7/2
3.2 A random variable X takes two values 0 and 1 with equal probability. Find E[X].
Solution
E[X] = Σᵢ xᵢ pX(xᵢ)
Given pX(0) = 1/2 and pX(1) = 1/2,
E[X] = 0(1/2) + 1(1/2) = 1/2
3.3 Calculate the expectation of a geometric random variable with pmf P[X = n] = p(1 - p)^{n-1}, n ≥ 1.
Solution
E[X] = Σₙ n P(X = n) = Σₙ n p(1 - p)^{n-1}
= p Σ_{n=1}^{∞} n q^{n-1}  where q = 1 - p
= p Σ_{n=1}^{∞} d(qⁿ)/dq = p d/dq [Σ_{n=1}^{∞} qⁿ] = p d/dq [q/(1 - q)]
= p [1/(1 - q)]² = p/(1 - q)² = p/p² = 1/p
E[X] = 1/p
EXPECTATION OF A CONTINUOUS
RANDOM VARIABLE 3.3
If X is a continuous random variable with probability density function fX(x), then the expectation of the random variable is given by
E[X] = mX = ∫_{-∞}^{∞} x fX(x) dx        (3.5)
Solved Problem
Solution
E[X] = ∫_{-∞}^{∞} x fX(x) dx
Solved Problem
3.5 In an experiment, the input X and output Y are related by
Y = g(X) = X³/2
If X is a random variable with probability density function
fX(x) = 1/4,  2 ≤ x ≤ 6
      = 0,  otherwise
find the expected value of Y.
Solution
E[Y] = E[g(X)] = ∫_{-∞}^{∞} g(x) fX(x) dx = ∫_{2}^{6} (x³/2)(1/4) dx
= (1/8)(x⁴/4) |_{2}^{6} = (6⁴ - 2⁴)/32 = (1296 - 16)/32 = 40
3.6 In the circuit shown in Fig. 3.1, R1 is a random variable uniformly distributed between R0 - ΔR and R0 + ΔR.
(a) Find an expression for the power dissipated in R2 for any constant voltage V.
(b) Find the mean value of the power when R1 is random.
(c) Find the mean power if V = 10 volts, R2 = 100 Ω, R0 = 150 Ω and ΔR = 10 Ω.
Fig. 3.1 Circuit for Solved Problem 3.6
Solution
(a) The power dissipated in R2 is
PR2 = [V/(R1 + R2)]² R2
(b) The pdf of R1 is
fR1(r1) = 1/(2ΔR) for R0 - ΔR ≤ r1 ≤ R0 + ΔR
        = 0 otherwise
P̄ = ∫_{-∞}^{∞} PR2 fR1(r1) dr1 = ∫_{R0-ΔR}^{R0+ΔR} (1/(2ΔR)) [V/(r1 + R2)]² R2 dr1
= [V² R2/(2ΔR)] [-1/(r1 + R2)]_{R0-ΔR}^{R0+ΔR}
= -[V² R2/(2ΔR)] [1/(R0 + ΔR + R2) - 1/(R0 - ΔR + R2)]
= -[V² R2/(2ΔR)] [-2ΔR/((R0 + R2)² - (ΔR)²)]
= V² R2/[(R0 + R2)² - (ΔR)²]
(c) Given V = 10 volts, R0 = 150 Ω, R2 = 100 Ω and ΔR = 10 Ω,
P̄ = (10)²(100)/[(250)² - 10²] = 10000/62400 = 0.16 W
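The closed form can be checked by directly averaging the power over the uniform resistance (SciPy quadrature; the values are from part (c)):

```python
from scipy.integrate import quad

V, R2, R0, dR = 10.0, 100.0, 150.0, 10.0

def power(r1):
    # Power dissipated in R2 for a given resistance R1 = r1
    return (V / (r1 + R2)) ** 2 * R2

mean, _ = quad(lambda r: power(r) / (2 * dR), R0 - dR, R0 + dR)
closed = V**2 * R2 / ((R0 + R2) ** 2 - dR**2)
print(round(mean, 5), round(closed, 5))  # both ~0.16026
```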
Practice Problems
3.1 Find the mean of the discrete random variable X with the following pmf:
PX(x) = 1/3,  x = 3
      = 2/3,  x = 6        (Ans: 5)
3.2 If X is a random variable with the following distribution function
FX(x) = 0,  x < 1
      = 0.3,  1 ≤ x < 2
      = 0.5,  2 ≤ x < 3
      = 0.8,  3 ≤ x < 4
      = 1,  x ≥ 4
(a) What is the expected value of X?
(b) What is the pmf of X?
(Ans: (a) 2.4; (b) 0.3 δ(x - 1) + 0.2 δ(x - 2) + 0.3 δ(x - 3) + 0.2 δ(x - 4))
3.3 A discrete random variable X has the pmf
x       -2     -1    0     1     2
pX(x)   1/10   1/5   1/5   3/10  1/5
Find E[X].        (Ans: 3/5)
Solved Problems
3.7 Given W = g(X) = X/3 and
fX(x) = 8/x³,  x > 2
find E[W].
Solution
E[W] = E[g(X)] = ∫_{-∞}^{∞} g(x) fX(x) dx = ∫_{2}^{∞} (x/3)(8/x³) dx
= (8/3) ∫_{2}^{∞} x⁻² dx = (8/3) [-1/x]_{2}^{∞} = (8/3)(1/2) = 4/3
E[W] = 4/3
3.8 For g(X) = e^{3X/4} and fX(x) = e^{-x} u(x),
E[g(X)] = ∫_{-∞}^{∞} g(x) fX(x) dx = ∫_{0}^{∞} e^{-x} e^{3x/4} dx = ∫_{0}^{∞} e^{-x/4} dx
= [e^{-x/4}/(-1/4)]_{0}^{∞} = -4(-1) = 4
3.9 Find the expected value of the function g(X) = X², where X is a random variable with density fX(x) = a e^{-ax} u(x), where a is a constant.
Solution
Given g(X) = X² and fX(x) = a e^{-ax} u(x). Since u(x) = 1 for x ≥ 0 and 0 for x < 0,
E[g(X)] = ∫_{-∞}^{∞} g(x) fX(x) dx = ∫_{0}^{∞} x² a e^{-ax} dx = a ∫_{0}^{∞} x² e^{-ax} dx
Integrating by parts (and using lim_{x→∞} x² e^{-ax} = lim_{x→∞} x e^{-ax} = 0),
= a [(-1/a) x² e^{-ax} - (2/a²) x e^{-ax} - (2/a³) e^{-ax}]_{0}^{∞}
= a (2/a³) = 2/a²
E[g(X)] = 2/a²
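A symbolic cross-check of this second moment (SymPy assumed; a is kept positive):

```python
import sympy as sp

a, x = sp.symbols("a x", positive=True)
# E[X^2] for f_X(x) = a*exp(-a*x), x >= 0
print(sp.integrate(x**2 * a * sp.exp(-a * x), (x, 0, sp.oo)))  # 2/a**2
```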
3.10 The nickel content of an alloy, say X, may be considered a random variable with the following pdf:
fX(x) = x²/100,  0 ≤ x ≤ 5. Find E[X].
Solution
E[X] = ∫_{-∞}^{∞} x fX(x) dx = ∫_{0}^{5} x (x²/100) dx = (1/100) ∫_{0}^{5} x³ dx
= (1/100)(x⁴/4) |_{0}^{5} = (1/100)(5⁴/4) = 25/16
3.11 The pressure P on the surface of an airplane is given by the relationship P = 0.0025V². The velocity V is a uniformly distributed random variable over the interval (0, 5). Find the expected value of P.
Solution
Given P = 0.0025V² and fV(v) = 1/5 for 0 ≤ v ≤ 5, 0 otherwise.
E[P] = ∫_{-∞}^{∞} p fV(v) dv = ∫_{0}^{5} 0.0025 v² (1/5) dv
= (0.0025/5)(v³/3) |_{0}^{5} = (0.0025/5)(125/3) = 0.0208
REVIEW QUESTIONS
1. Explain in detail about expectation. How is the expected value calculated for discrete random
variable?
2. Give the importance of expected value. How is it calculated for a continuous random variable?
The conditional expected value of X, given the event {X ≤ a}, is
E[X | X ≤ a] = [∫_{-∞}^{a} x fX(x) dx]/FX(a)        (3.10)
where FX(a) = ∫_{-∞}^{a} fX(x) dx
Solved Problems
3.12 X is uniformly distributed with
fX(x) = 1/40,  10 ≤ x ≤ 50
      = 0,  otherwise
Find E[X | X ≤ 40].
Solution
FX(40) = ∫_{10}^{40} (1/40) dx = (1/40)(30) = 3/4
fX(x | X ≤ 40) = fX(x)/FX(40) = (4/3) fX(x) = 1/30 for 10 ≤ x ≤ 40
              = 0 otherwise
E[X | X ≤ 40] = [∫_{-∞}^{40} x fX(x) dx]/FX(40) = (4/3) ∫_{10}^{40} (x/40) dx
= (1/30)(x²/2) |_{10}^{40} = (1600 - 100)/60 = 1500/60 = 25
3.13 X is exponentially distributed with fX(x) = (1/b) e^{-x/b}, x ≥ 0. Find E[X | X ≥ a].
Solution Since P(X ≥ a) = e^{-a/b},
fX(x | X ≥ a) = fX(x)/P(X ≥ a) = [(1/b) e^{-x/b}]/e^{-a/b} = (1/b) e^{a/b} e^{-x/b} for x ≥ a
             = 0 otherwise
E[X | X ≥ a] = ∫_{-∞}^{∞} x fX(x | X ≥ a) dx = (1/b) e^{a/b} ∫_{a}^{∞} x e^{-x/b} dx
= (1/b) e^{a/b} [-bx e^{-x/b} - b² e^{-x/b}]_{a}^{∞}
= (1/b) e^{a/b} [ab e^{-a/b} + b² e^{-a/b}]
= a + b
Equivalently,
E[X | X ≥ a] = [∫_{a}^{∞} x fX(x) dx]/P(X ≥ a) = e^{a/b} (1/b) ∫_{a}^{∞} x e^{-x/b} dx
= (1/b) e^{a/b} [ab e^{-a/b} + b² e^{-a/b}]
= a + b
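A numerical check that the conditional mean of an exponential variable shifts by exactly a (SciPy quadrature; a and b are arbitrary):

```python
import math
from scipy.integrate import quad

a, b = 2.0, 5.0
pdf = lambda x: (1 / b) * math.exp(-x / b)

num, _ = quad(lambda x: x * pdf(x), a, float("inf"))
cond_mean = num / math.exp(-a / b)  # divide by P(X >= a)
print(round(cond_mean, 6), a + b)   # 7.0 and 7.0
```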
REVIEW QUESTIONS
3. Explain conditional expected value of X, given that an event A has occurred.
E[CX] = ∫_{-∞}^{∞} Cx fX(x) dx = C ∫_{-∞}^{∞} x fX(x) dx        (3.12)
= C E[X]        (3.13)
3. If a and b are constants, then
E[aX + b] = aE[X] + b
Proof:
E[aX + b] = ∫_{-∞}^{∞} (ax + b) fX(x) dx
= ∫_{-∞}^{∞} ax fX(x) dx + ∫_{-∞}^{∞} b fX(x) dx
= a ∫_{-∞}^{∞} x fX(x) dx + b ∫_{-∞}^{∞} fX(x) dx
= aE[X] + b
4. If X ≥ 0, then E[X] ≥ 0
Proof:
E[X] = ∫_{-∞}^{∞} x fX(x) dx = ∫_{-∞}^{0} x fX(x) dx + ∫_{0}^{∞} x fX(x) dx
Since X ≥ 0, fX(x) = 0 for x < 0, so
E[X] = ∫_{0}^{∞} x fX(x) dx ≥ 0
5. The expectation of a sum of random variables is equal to the sum of the expectations.
Proof: If g1(x) and g2(x) are two different functions of a random variable X, then
E[g1(x) + g2(x)] = E[g1(x)] + E[g2(x)]
E[g1(x) + g2(x)] = ∫_{-∞}^{∞} [g1(x) + g2(x)] fX(x) dx        (3.14)
= ∫_{-∞}^{∞} g1(x) fX(x) dx + ∫_{-∞}^{∞} g2(x) fX(x) dx
= E[g1(x)] + E[g2(x)]
REVIEW QUESTIONS
4. Explain the properties of expectations
5. Prove that (a) E[aX + b] = aE[X] + b
(b) E[CX] = CE[X]
(c) E[g1(x) + g2(x)] = E[g1(x)] + E[g2(x)]
Solved Problems
3.14 A random variable X has the pdf
fX(x) = (π/16) cos(πx/8) for -4 ≤ x ≤ 4
      = 0 elsewhere
Find (a) E[3X] and (b) E[X²].
Solution
(a) Using Property 2, E[3X] = 3E[X].
E[X] = ∫_{-∞}^{∞} x fX(x) dx = (π/16) ∫_{-4}^{4} x cos(πx/8) dx
The integrand x cos(πx/8) is an odd function, so its integral over the symmetric interval [-4, 4] vanishes:
E[X] = 0 and hence E[3X] = 3E[X] = 0
(Integrating by parts gives the same result: (π/16)[(8x/π) sin(πx/8) + (64/π²) cos(πx/8)]_{-4}^{4} = 0.)
(b) E[X²] = ∫_{-∞}^{∞} x² fX(x) dx = (π/16) ∫_{-4}^{4} x² cos(πx/8) dx
Integrating by parts twice,
∫ x² cos(πx/8) dx = (8/π) x² sin(πx/8) + 2(8/π)² x cos(πx/8) - 2(8/π)³ sin(πx/8)
Evaluating from -4 to 4, where sin(±π/2) = ±1 and cos(±π/2) = 0,
E[X²] = (π/16) [(8/π)(16)(2) - 2(8/π)³(2)]
= (1/2)(32) - (64/π²)(2) = 16 - 128/π²
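A numerical verification of both moments (SciPy quadrature):

```python
import math
from scipy.integrate import quad

pdf = lambda x: (math.pi / 16) * math.cos(math.pi * x / 8)

m1, _ = quad(lambda x: x * pdf(x), -4, 4)
m2, _ = quad(lambda x: x * x * pdf(x), -4, 4)
print(round(m1, 10))                         # 0.0
print(round(m2, 6), 16 - 128 / math.pi**2)   # both ~3.0309
```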
3.15 The density function of a random variable X is
fX(x) = 5e^{-5x},  0 ≤ x ≤ ∞
      = 0,  elsewhere
Find (a) E[X], (b) E[(X - 1)²], and (c) E[3X - 1].
Solution
(a) E[X] = ∫_{0}^{∞} x (5e^{-5x}) dx = 5 ∫_{0}^{∞} x e^{-5x} dx
Integrating by parts (with lim_{x→∞} x e^{-5x} = 0),
= 5 [(-x/5) e^{-5x} - (1/25) e^{-5x}]_{0}^{∞} = 5(1/25) = 1/5
⇒ E[X] = 1/5
E[X²] = ∫_{0}^{∞} x² (5e^{-5x}) dx = 5 ∫_{0}^{∞} x² e^{-5x} dx
= 5 [(-x²/5) e^{-5x} - (2x/25) e^{-5x} - (2/125) e^{-5x}]_{0}^{∞}  (using lim_{x→∞} x² e^{-5x} = 0)
= 5(2/125) = 2/25
E[X²] = 2/25
(b) E[(X - 1)²] = E[X² - 2X + 1] = E[X²] - 2E[X] + 1
= 2/25 - 2/5 + 1 = 17/25
(c) E[3X - 1] = 3E[X] - 1 = 3(1/5) - 1 = -2/5
MOMENTS 3.7
Moments of a random variable X are of two types:
(i) Moments about the origin
(ii) Central moments
The nth moment about the origin is mn = E[Xⁿ]. If n = 0, we get the area under fX(x), which is equal to 1, while n = 1 gives E[X]. The second moment about the origin is known as the mean-square value of X:
m2 = E[X²] = ∫_{-∞}^{∞} x² fX(x) dx        (3.16)
If X is a discrete random variable, the nth-order moment about the origin is
E[Xⁿ] = Σᵢ xᵢⁿ pX(xᵢ)        (3.17)
For a discrete random variable, the nth-order central moment is
μn = E[(X - mX)ⁿ] = Σᵢ (xᵢ - mX)ⁿ pX(xᵢ)        (3.19)
The positive square root sX of the variance is called the standard deviation. It is a measure of the spread in a pdf or pmf. If a random variable has a concentrated pdf or pmf, it will have a small variance; if it has a widely spread pdf or pmf, it will have a large variance.
Consider three random variables X1, X2 and X3 with the same mean and different variances. From Fig. 3.2 we find that X1 has the smallest spread about the mean whereas X3 has the largest. In other words, most of the values of X1 are close to the mean value, whereas relatively few values of X3 are close to the mean. In terms of variance, we say that X1 has the smallest variance while X3 has the largest variance.
From Eq. (3.21), we have
sX² = E[(X - mX)²] = E[X² - 2X mX + mX²]
= E[X²] - 2mX E[X] + mX² = E[X²] - mX²
REVIEW QUESTIONS
6. If X is random variable, show that Var (aX + b) = a2 Var (X).
7. Define variance and explain its properties.
Solved Problems
3.16 A random variable X denotes the outcome of throwing a fair dice. Find the mean and variance.
Solution When we throw a dice, the sample space is S = {1, 2, 3, 4, 5, 6}. Let X denote the outcome. The probability of each outcome is 1/6.
E[X] = Σᵢ xᵢ pX(xᵢ) = 1(1/6) + 2(1/6) + 3(1/6) + 4(1/6) + 5(1/6) + 6(1/6)
= (1/6)[1 + 2 + 3 + 4 + 5 + 6] = 21/6 = 7/2 = 3.5
E[X²] = Σᵢ xᵢ² pX(xᵢ) = (1/6)[1² + 2² + 3² + 4² + 5² + 6²] = 91/6
sX² = E[X²] - {E[X]}² = 91/6 - (7/2)² = 35/12
3.17 A random variable X takes the values 0, 2, 4 and 6 with equal probability 1/4. Find the mean and variance.
Solution
mX = E[X] = Σᵢ xᵢ pX(xᵢ) = 0(1/4) + 2(1/4) + 4(1/4) + 6(1/4) = 3
E[X²] = Σᵢ xᵢ² pX(xᵢ) = 0²(1/4) + 2²(1/4) + 4²(1/4) + 6²(1/4) = 14
sX² = E[X²] - {E[X]}² = 14 - (3)² = 5
3.18 When two unbiased dice are thrown simultaneously, find the expected value of the sum of the numbers shown on the dice.
Solution When two dice are thrown simultaneously, the sample space consists of the 36 equally likely pairs (1, 1), (1, 2), ..., (6, 6).
Let X be the random variable which denotes the sum of the numbers shown on the dice. Then
P[X = 2] = 1/36; P[X = 3] = 2/36; P[X = 4] = 3/36; P[X = 5] = 4/36; P[X = 6] = 5/36; P[X = 7] = 6/36;
P[X = 8] = 5/36; P[X = 9] = 4/36; P[X = 10] = 3/36; P[X = 11] = 2/36; P[X = 12] = 1/36
E[X] = Σ_{i=2}^{12} xᵢ pX(xᵢ)
= 2(1/36) + 3(2/36) + 4(3/36) + 5(4/36) + 6(5/36) + 7(6/36) + 8(5/36) + 9(4/36) + 10(3/36) + 11(2/36) + 12(1/36)
= (2 + 6 + 12 + 20 + 30 + 42 + 40 + 36 + 30 + 22 + 12)/36 = 252/36 = 7
3.19 A discrete random variable X takes the values -2, -1, 0, 1, 2 with probabilities 2/5, 1/10, 1/5, 1/5, 1/10 respectively.
Find (i) E[X], (ii) E[2X + 5], (iii) E[X²], (iv) E[(X + 2)²].
Solution
(i) E[X] = -2(2/5) + (-1)(1/10) + 0(1/5) + 1(1/5) + 2(1/10)
= -4/5 - 1/10 + 0 + 1/5 + 2/10 = -0.5
(ii) E[2X + 5] = 2E[X] + 5 = 2(-0.5) + 5 = 4
(iii) E[X²] = 4(2/5) + 1(1/10) + 0 + 1(1/5) + 4(1/10)
= 16/10 + 1/10 + 2/10 + 4/10 = 23/10 = 2.3
(iv) E[(X + 2)²] = E[X² + 4X + 4] = E[X²] + 4E[X] + 4
= 2.3 + 4(-0.5) + 4 = 4.3
3.20 The random variable X takes the values xₙ = n, n = 1, 2, ..., with probability pX(xₙ) = (1/2)ⁿ. Find the mean value of X.
Solution
E[X] = Σ_{n=1}^{∞} n pX(xₙ) = Σ_{n=1}^{∞} n (1/2)ⁿ
We know Σ_{n=1}^{∞} xⁿ = x/(1 - x) for |x| < 1.
Differentiating on both sides, we get
Σ_{n=1}^{∞} n xⁿ⁻¹ = 1/(1 - x)²
⇒ Σ_{n=1}^{∞} n xⁿ = x/(1 - x)²
⇒ Σ_{n=1}^{∞} n (1/2)ⁿ = (1/2)/(1 - 1/2)² = 2
E[X] = 2
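A direct numerical check of the series (plain Python):

```python
# E[X] = sum n * (1/2)^n over n >= 1; the tail terms vanish quickly
print(sum(n * 0.5 ** n for n in range(1, 100)))  # 2.0 (to machine precision)
```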
3.21 Find the mean and variance of the random variable whose pdf is given in Fig. 3.3.
[Fig. 3.3: triangular pdf rising linearly from 0 at x = 0 to 1/2 at x = 2 and falling back to 0 at x = 4]
Solution From the figure,
fX(x) = x/4 for 0 ≤ x ≤ 2
      = 1 - x/4 for 2 ≤ x ≤ 4
mX = E[X] = ∫_{0}^{2} x (x/4) dx + ∫_{2}^{4} x (1 - x/4) dx
= (1/4)(x³/3) |_{0}^{2} + (x²/2) |_{2}^{4} - (1/4)(x³/3) |_{2}^{4}
= 2/3 + 6 - 56/12 = 2
E[X²] = ∫_{0}^{2} x² (x/4) dx + ∫_{2}^{4} x² (1 - x/4) dx
= (1/4)(x⁴/4) |_{0}^{2} + (x³/3) |_{2}^{4} - (1/4)(x⁴/4) |_{2}^{4}
= 1 + 56/3 - 15 = 14/3
sX² = E[X²] - mX² = 14/3 - (2)² = 2/3
Practice Problems
3.12 Find E[X²] and the variance of the discrete random variable of Practice Problem 3.1. (Ans. 27, 2)
3.13 Find E[X²] and the variance of the discrete random variable of Practice Problem 3.2. (Ans. 7, 1.24)
3.14 For the random variable given in Practice Problem 3.3, find E[X²], E[X³] and the variance. (Ans: 5/7, 5/8, 0.0198)
3.15 A random variable has the following pdf:
fX(x) = x,  0 ≤ x ≤ 1
      = 3/4 - x/4,  1 ≤ x ≤ 3
      = 0,  otherwise
Solved Problems
3.22 The pdf of a Laplace random variable is
fX(x) = (λ/2) e^{-λ|x|},  λ > 0,  -∞ < x < ∞
Find the CDF, the mean and the mean-square value of X.
Solution
The CDF is FX(x) = ∫_{-∞}^{x} fX(u) du
For x < 0,
FX(x) = ∫_{-∞}^{x} (λ/2) e^{λu} du = (λ/2)(e^{λu}/λ) |_{-∞}^{x} = (1/2) e^{λx}
For x ≥ 0,
FX(x) = ∫_{-∞}^{0} (λ/2) e^{λu} du + ∫_{0}^{x} (λ/2) e^{-λu} du
= (λ/2)(1/λ) + (λ/2)[e^{-λu}/(-λ)]_{0}^{x}
= 1/2 - (1/2)(e^{-λx} - 1) = 1 - (1/2) e^{-λx}
The mean is
mX = E[X] = ∫_{-∞}^{0} x (λ/2) e^{λx} dx + ∫_{0}^{∞} x (λ/2) e^{-λx} dx
= (λ/2)[x e^{λx}/λ - e^{λx}/λ²]_{-∞}^{0} + (λ/2)[-x e^{-λx}/λ - e^{-λx}/λ²]_{0}^{∞}
= (λ/2)(-1/λ²) + (λ/2)(1/λ²) = 0
(as expected, since the density is symmetric about x = 0).
The mean-square value is
E[X²] = ∫_{-∞}^{0} x² (λ/2) e^{λx} dx + ∫_{0}^{∞} x² (λ/2) e^{-λx} dx
= (λ/2)(2/λ³) + (λ/2)(2/λ³)
= 2/λ²
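A symbolic verification of the Laplace moments (SymPy assumed; the density is split at 0 to avoid an absolute value in the integrand):

```python
import sympy as sp

lam, x = sp.symbols("lam x", positive=True)

m1 = sp.integrate(x * (lam / 2) * sp.exp(lam * x), (x, -sp.oo, 0)) + \
     sp.integrate(x * (lam / 2) * sp.exp(-lam * x), (x, 0, sp.oo))
m2 = sp.integrate(x**2 * (lam / 2) * sp.exp(lam * x), (x, -sp.oo, 0)) + \
     sp.integrate(x**2 * (lam / 2) * sp.exp(-lam * x), (x, 0, sp.oo))
print(sp.simplify(m1), sp.simplify(m2))  # 0 and 2/lam**2
```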
3.23 Suppose fX(x) = x/8 for 0 < x < 4. Determine the mean and variance of X.
Solution
Given fX(x) = x/8.
mX = E[X] = ∫_{0}^{4} x (x/8) dx = (1/8)(x³/3) |_{0}^{4} = (1/8)(64/3) = 8/3
E[X²] = ∫_{0}^{4} x² (x/8) dx = ∫_{0}^{4} (x³/8) dx = (1/8)(x⁴/4) |_{0}^{4} = 8
sX² = E[X²] - mX² = 8 - (8/3)² = 8 - 64/9 = 8/9
3.24 The pdf of a random variable X is
fX(x) = kx for 0 ≤ x ≤ 2
      = k(4 - x) for 2 ≤ x ≤ 4
      = 0 otherwise
Find k, the mean, the variance and the CDF of X.
Solution For a valid pdf, ∫_{-∞}^{∞} fX(x) dx = 1:
∫_{0}^{2} kx dx + ∫_{2}^{4} k(4 - x) dx = 1
k(x²/2) |_{0}^{2} + k[4x - x²/2]_{2}^{4} = 1
2k + 8k - 6k = 1 ⇒ 4k = 1 ⇒ k = 1/4
mX = E[X] = ∫_{0}^{2} x(kx) dx + ∫_{2}^{4} x k(4 - x) dx
= (1/4) ∫_{0}^{2} x² dx + (1/4) ∫_{2}^{4} x(4 - x) dx
= (1/4)(x³/3) |_{0}^{2} + (1/4){4(x²/2) - x³/3}|_{2}^{4}
= 2/3 + (1/4){24 - 56/3} = 2/3 + (1/4)(16/3) = 2
mX = 2
E[X²] = (1/4) ∫_{0}^{2} x³ dx + (1/4) ∫_{2}^{4} x²(4 - x) dx
= (1/4)(x⁴/4) |_{0}^{2} + (1/4){4(x³/3) - x⁴/4}|_{2}^{4}
= 1 + (1/4){4(56/3) - 60} = 14/3
sX² = E[X²] - {E[X]}² = 14/3 - (2)² = 2/3
The CDF is FX(x) = ∫_{-∞}^{x} fX(u) du
⇒ FX(x) = 0 for x < 0
For 0 ≤ x ≤ 2,
FX(x) = ∫_{0}^{x} (1/4) u du = (1/4)(u²/2) |_{0}^{x} = x²/8
For 2 ≤ x ≤ 4,
FX(x) = ∫_{0}^{2} (1/4) x dx + ∫_{2}^{x} (1/4)(4 - u) du
= 1/2 + (1/4){4(x - 2) - x²/2 + 2} = -x²/8 + x - 1
FX(x) = 1 for x ≥ 4
3.25 Find the expected value of the function g(X) = X³, where X is a random variable with pdf fX(x) = (1/2) e^{-x/2} u(x).
Solution
E[g(X)] = ∫_{-∞}^{∞} g(x) fX(x) dx = ∫_{0}^{∞} (1/2) e^{-x/2} x³ dx
Integrating by parts repeatedly,
= (1/2) {[-2x³ e^{-x/2} - 12x² e^{-x/2} - 48x e^{-x/2}]_{0}^{∞} + 48 ∫_{0}^{∞} e^{-x/2} dx}
= (1/2) {0 + 48 [e^{-x/2}/(-1/2)]_{0}^{∞}} = (1/2)(48)(2) = 48
3.26 A random variable X represents the value of coins (in rupees) given in change when purchases are made at a particular store. Suppose the probabilities of ` 0.5, ` 1, ` 2, ` 5 and ` 10 being present in the change are 1/10, 1/5, 2/5, 1/5 and 1/10 respectively. (i) Write an expression for the pdf of X. (ii) Find the mean of X.
Solution
(i) fX(x) = 0.1 δ(x - 0.5) + 0.2 δ(x - 1) + 0.4 δ(x - 2) + 0.2 δ(x - 5) + 0.1 δ(x - 10)
(ii) E[X] = mX = 0.5(0.1) + 1(0.2) + 2(0.4) + 5(0.2) + 10(0.1) = ` 3.05
Solved Problems
3.27 The first four moments of a distribution about x = 4 are 1, 4, 10 and 45 respectively. Show that the mean is 5, the variance is 3, μ₃ = 0 and μ₄ = 26.

Solution
Given: E[X − 4] = 1, E[(X − 4)²] = 4, E[(X − 4)³] = 10, E[(X − 4)⁴] = 45.
From E[X − 4] = 1, E[X] = m_X = 5, so m₁ = 5.
E[(X − 4)²] = 4 gives E[X² − 8X + 16] = 4, from which E[X²] = 28, so m₂ = 28.
E[(X − 4)³] = 10 gives E[X³ − 12X² + 48X − 64] = 10, so
$$E[X^3] = 10 + 12E[X^2] - 48E[X] + 64 = 10 + 12(28) - 48(5) + 64 = 170 \;\Rightarrow\; m_3 = 170$$
E[(X − 4)⁴] = 45 gives E[X⁴] − 16E[X³] + 96E[X²] − 256E[X] + 256 = 45, so
$$E[X^4] = 45 + 16(170) - 96(28) + 256(5) - 256 = 1101 \;\Rightarrow\; m_4 = 1101$$
Hence m₁ = m_X = E[X] = 5 and
$$\sigma_X^2 = \mathrm{Var}(X) = E[X^2] - \{E[X]\}^2 = 28 - (5)^2 = 3$$
$$\mu_3 = m_3 - 3m_X\sigma_X^2 - m_X^3 = 170 - 3(5)(3) - (5)^3 = 0$$
$$\mu_4 = E[(X - m_X)^4] = m_4 - 4m_1 m_3 + 6m_2 m_1^2 - 3m_1^4 = 1101 - 4(5)(170) + 6(28)(5)^2 - 3(5)^4 = 26$$
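The moment conversions above are easy to verify by direct arithmetic; a minimal Python sketch:

```python
# Check of Solved Problem 3.27: central moments from moments about the origin
m1, m2, m3, m4 = 5, 28, 170, 1101           # moments about the origin
mean = m1
var = m2 - m1**2                             # 3
mu3 = m3 - 3*m1*var - m1**3                  # 0
mu4 = m4 - 4*m1*m3 + 6*m2*m1**2 - 3*m1**4    # 26
print(mean, var, mu3, mu4)
```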
Mean and Variance of Binomial Distribution  For a binomial random variable, P[X = x] = \binom{n}{x}p^x(1-p)^{n-x}, x = 0, 1, …, n.
$$E[X] = \sum_{x=0}^{n} x\,\frac{n!}{x!(n-x)!}\,p^x(1-p)^{n-x} \qquad (3.29)$$
$$= \sum_{x=1}^{n} \frac{n(n-1)!}{(x-1)!(n-x)!}\,p\cdot p^{x-1}(1-p)^{n-x}$$
$$= np\sum_{x=1}^{n} \frac{(n-1)!}{(x-1)!\,[(n-1)-(x-1)]!}\,p^{x-1}(1-p)^{(n-1)-(x-1)}$$
$$= np\sum_{x=1}^{n} \binom{n-1}{x-1} p^{x-1}(1-p)^{(n-1)-(x-1)} = np\sum_{x=0}^{n-1} \binom{n-1}{x} p^x(1-p)^{n-1-x} \qquad (3.30)$$
Similarly, by the binomial theorem,
$$(p + q)^{n-1} = \sum_{x=0}^{n-1} \binom{n-1}{x} p^x q^{n-1-x} \qquad (3.32)$$
so with q = 1 − p the sum in Eq. (3.30) equals (p + 1 − p)^{n-1} = 1, and E[X] = np.
Variance
$$\sigma_X^2 = \mathrm{Var}(X) = E[X^2] - \{E[X]\}^2$$
$$E[X^2] = \sum_{x=0}^{n} x^2 P[x] = \sum_{x=0}^{n} [x(x-1) + x]\binom{n}{x} p^x(1-p)^{n-x}$$
$$= \sum_{x=0}^{n} x(x-1)\binom{n}{x} p^x(1-p)^{n-x} + \sum_{x=0}^{n} x\binom{n}{x} p^x(1-p)^{n-x}$$
The x = 0 and x = 1 terms of the first summation are zero, and the second summation equals E[X]. Therefore
$$E[X^2] = \sum_{x=2}^{n} x(x-1)\frac{n!}{x!(n-x)!} p^x(1-p)^{n-x} + E[X] = \sum_{x=2}^{n} \frac{n!}{(x-2)!(n-x)!} p^x(1-p)^{n-x} + E[X]$$
$$= n(n-1)p^2 \sum_{x=2}^{n} \frac{(n-2)!}{(x-2)!\,[(n-2)-(x-2)]!} p^{x-2}(1-p)^{(n-2)-(x-2)} + E[X]$$
$$= n(n-1)p^2 \sum_{x=0}^{n-2} \binom{n-2}{x} p^x(1-p)^{(n-2)-x} + E[X] = n(n-1)p^2 + np$$
Hence
$$\sigma_X^2 = n(n-1)p^2 + np - (np)^2 = np(1 - p)$$
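These two formulas can be spot-checked numerically; a small Python sketch using the parameters of Solved Problem 3.30 below (n = 9, p = 2/3):

```python
from math import comb

# Check that E[X] = n*p and Var(X) = n*p*(1-p) for a binomial pmf
n, p = 9, 2/3
pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]
mean = sum(x * q for x, q in enumerate(pmf))
var = sum(x * x * q for x, q in enumerate(pmf)) - mean**2
print(mean, n * p)           # both 6.0
print(var, n * p * (1 - p))  # both 2.0
```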
Solved Problems

3.28 If the mean and variance of a binomial distribution are 6 and 1.5 respectively, find P[X ≥ 3].

Solution  np = 6 and npq = 1.5 give q = 1.5/6 = 1/4, p = 3/4 and n = 8. Hence
$$P[X \ge 3] = 1 - \left\{\binom{8}{0}p^0(1-p)^8 + \binom{8}{1}p^1(1-p)^7 + \binom{8}{2}p^2(1-p)^6\right\} \approx 0.9958$$

3.29 A 40-student B.Tech class contains 25 boys and 15 girls. When the teacher asks a question, the probability that a boy knows the answer is 1/5 and the probability that a girl knows it is 3/5. Let X denote the number of students who know the answer to a question the teacher asks in class. Find m_X and Var(X).

Solution  The probability that a randomly chosen student knows the answer is
$$p = \frac{25}{40}\cdot\frac{1}{5} + \frac{15}{40}\cdot\frac{3}{5} = \frac{7}{20}$$
so X is binomial with n = 40 and p = 7/20:
$$E[X] = np = 40\left(\frac{7}{20}\right) = 14, \qquad \mathrm{Var}(X) = npq = 40\left(\frac{7}{20}\right)\left(\frac{13}{20}\right) = 9.1$$
3.30 For a binomial distribution the mean is 6 and the standard deviation is √2. Find the first two terms of the distribution.

Solution
$$\text{Variance} = npq = (\sqrt{2})^2 = 2, \qquad q = \frac{npq}{np} = \frac{2}{6} = \frac{1}{3}, \qquad p = 1 - q = \frac{2}{3}$$
$$np = n\left(\frac{2}{3}\right) = 6 \;\Rightarrow\; n = 9$$
The first two terms of the distribution are given by P(X = x) = \binom{9}{x}p^x q^{9-x} for x = 0, 1:
$$P(X = 0) = \binom{9}{0}\left(\frac{2}{3}\right)^0\left(\frac{1}{3}\right)^9 = \left(\frac{1}{3}\right)^9$$
$$P(X = 1) = \binom{9}{1}\left(\frac{2}{3}\right)^1\left(\frac{1}{3}\right)^8 = 9.14 \times 10^{-4}$$
3.31 Six dice are thrown 729 times. How many times do you expect at least three dice to show 5 or 6?

Solution  When a die is thrown there are 6 possible outcomes, so the probability of getting 5 or 6 is 2/6 = 1/3.
Since p = 1/3, q = 1 − p = 2/3, and n = 6,
P(at least three dice show 5 or 6) = P(X ≥ 3) = P(X = 3) + P(X = 4) + P(X = 5) + P(X = 6)
$$= \binom{6}{3}\left(\frac{1}{3}\right)^3\left(\frac{2}{3}\right)^3 + \binom{6}{4}\left(\frac{1}{3}\right)^4\left(\frac{2}{3}\right)^2 + \binom{6}{5}\left(\frac{1}{3}\right)^5\left(\frac{2}{3}\right)^1 + \binom{6}{6}\left(\frac{1}{3}\right)^6$$
$$= \frac{160 + 60 + 12 + 1}{729} = \frac{233}{729}$$
The number of times one can expect at least three dice to show 5 or 6 is (233/729)(729) = 233 times.
3.32 Four coins were tossed on a table 100 times, and the number of heads obtained in each of the 100 tosses was noted. The results are

x: 0, 1, 2, 3, 4
f: 6, 30, 32, 26, 6

Fit a binomial distribution to the data and compare the observed frequencies with the expected frequencies.
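The book's worked solution is not reproduced here, but the fit it asks for is easy to sketch in Python (the estimate p̂ = sample mean/n is the standard moment fit; variable names are illustrative):

```python
from math import comb

# Sketch of Solved Problem 3.32: fit a binomial(4, p) to the observed data
x = [0, 1, 2, 3, 4]
f = [6, 30, 32, 26, 6]
N = sum(f)                                          # 100 tosses
mean = sum(xi * fi for xi, fi in zip(x, f)) / N     # 1.96
p = mean / 4                                        # p-hat = mean/n = 0.49
expected = [N * comb(4, k) * p**k * (1 - p)**(4 - k) for k in x]
print([round(e, 1) for e in expected])              # ~[6.8, 26.0, 37.5, 24.0, 5.8]
```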
Practice Problem
3.24 50 balls are tossed into 25 boxes. What is the expected number of balls in the fifth box? (Ans: 2)

REVIEW QUESTION
8. For the binomial density, show that E[X] = np and σ_X² = np(1 − p).
Mean and Variance of Negative Binomial Distribution  For a negative binomial random variable with parameters k and p, P(X = x) = \binom{x+k-1}{k-1}p^k q^x, x = 0, 1, 2, …, where q = 1 − p.
$$m_X = \sum_x x\,P(X = x) = \sum_{x=0}^{\infty} x\binom{x+k-1}{k-1} p^k q^x$$
$$= \binom{k}{k-1}p^k q + 2\binom{k+1}{k-1}p^k q^2 + \cdots = k\,p^k q + 2\,\frac{k(k+1)}{2!}p^k q^2 + \cdots$$
$$= k p^k q\left[1 + \frac{k+1}{1!}q + \frac{(k+2)(k+1)}{2!}q^2 + \cdots\right] = k p^k q\,(1-q)^{-(k+1)} = k p^k q\,p^{-(k+1)} = \frac{kq}{p} \qquad (3.36)$$
The variance is given by
$$\mathrm{Var}(X) = E[X^2] - \{E[X]\}^2 = E[X(X-1) + X] - \{E[X]\}^2 = E[X(X-1)] + E[X] - \{E[X]\}^2 \qquad (3.37)$$
$$E[X(X-1)] = 2\binom{k+1}{k-1}p^k q^2 + 3(2)\binom{k+2}{k-1}p^k q^3 + 4(3)\binom{k+3}{k-1}p^k q^4 + \cdots$$
$$= 2\,\frac{k(k+1)}{2!}p^k q^2 + 3(2)\,\frac{(k+2)(k+1)k}{3!}p^k q^3 + 4(3)\,\frac{(k+3)(k+2)(k+1)k}{4!}p^k q^4 + \cdots$$
$$= k(k+1)p^k q^2\left[1 + (k+2)q + \frac{(k+2)(k+3)}{2}q^2 + \cdots\right] = k(k+1)p^k q^2 (1-q)^{-(k+2)} = \frac{k(k+1)q^2}{p^2} \qquad (3.38)$$
Hence
$$\mathrm{Var}(X) = \frac{k(k+1)q^2}{p^2} + \frac{kq}{p} - \frac{k^2 q^2}{p^2} = \frac{kq}{p}\left[\frac{(k+1)q}{p} + 1 - \frac{kq}{p}\right] = \frac{kq}{p}\left(\frac{q+p}{p}\right) = \frac{kq}{p^2} = \frac{k(1-p)}{p^2} \qquad (3.40)$$
REVIEW QUESTIONS
9. Derive expressions for the mean and variance of a negative binomial distribution.
Mean and Variance of Geometric Distribution  For a geometric random variable, P(X = x) = pq^{x-1}, x = 1, 2, …, where q = 1 − p.
$$E[X] = \sum_{x=1}^{\infty} x\,p q^{x-1} = p(1 + 2q + 3q^2 + \cdots) = p(1-q)^{-2} = \frac{p}{p^2} = \frac{1}{p} \qquad (3.42)$$
The variance is given by
$$\mathrm{Var}(X) = E[X^2] - \{E[X]\}^2$$
$$E[X(X-1)] = \sum_{x=1}^{\infty} x(x-1)pq^{x-1} = \sum_{x=2}^{\infty} x(x-1)pq^{x-1} = 2pq\sum_{x=2}^{\infty} \frac{x(x-1)}{2}\,q^{x-2}$$
$$= 2pq(1-q)^{-3} = \frac{2q}{p^2} \qquad (3.43)$$
$$\mathrm{Var}(X) = E[X(X-1)] + E[X] - \{E[X]\}^2 = \frac{2q}{p^2} + \frac{1}{p} - \frac{1}{p^2} = \frac{1-p}{p^2} = \frac{q}{p^2} \qquad (3.44)$$
Solved Problems

3.33 If the probability of success on each trial is 0.2, find the expected number of trials required for the first success.

Solution  Given p = 0.2, so q = 1 − p = 0.8. Let X represent the number of trials required to get the first success; then X follows a geometric distribution, and the expected number of trials is
$$E[X] = \frac{1}{p} = \frac{1}{0.2} = 5$$
3.34 The cost of conducting an experiment is ₹500. The experiment is continued until the first successful result is achieved; each failure requires an additional ₹150 to meet the extra cost. If the probability of success is 0.25 and individual trials are independent, what is the expected cost of the entire procedure?

3.35 If N is a geometric random variable, find P(N = k | N ≤ m).

Solution
$$P(N = k \mid N \le m) = \frac{P(N = k,\, N \le m)}{P(N \le m)} = \frac{P(N = k)}{P(N \le m)}, \quad 1 \le k \le m$$
$$P(N = k) = pq^{k-1} = p(1-p)^{k-1}$$
$$P(N \le m) = \sum_{k=1}^{m} pq^{k-1} = \sum_{k=1}^{m} p(1-p)^{k-1} = 1 - (1-p)^m$$
Hence P(N = k | N ≤ m) = p(1 − p)^{k-1}/[1 − (1 − p)^m] for 1 ≤ k ≤ m.
3.36 In your key bunch there are exactly 5 keys. If you try to open the door using the keys one after the other, what is the expected number of keys you will have to try before the door is opened?

Solution  The bunch contains 5 keys, so the probability that a key opens the door on any trial is p = 1/5. Let X be the random variable that denotes the number of trials required:
$$p_X(x) = A\,p(1-p)^{x-1}, \quad x = 1, 2, \ldots, 5$$
The value of A is obtained from $\sum_{x=1}^{5} A\,p(1-p)^{x-1} = 1$:
$$Ap\left[1 + (1-p) + (1-p)^2 + (1-p)^3 + (1-p)^4\right] = Ap\left\{\frac{1 - (1-p)^5}{1 - (1-p)}\right\} = 1 \;\Rightarrow\; A = \frac{1}{1 - (1-p)^5}$$
$$p_X(x) = \frac{p(1-p)^{x-1}}{1 - (1-p)^5}, \quad x = 1, 2, \ldots, 5$$
$$E[X] = \sum_{x=1}^{5} x\,p_X(x) = \frac{p}{1 - (1-p)^5}\left\{\sum_{x=1}^{5} x(1-p)^{x-1}\right\}$$
$$= \frac{1/5}{1 - (4/5)^5}\left\{1 + 2\left(\frac{4}{5}\right) + 3\left(\frac{4}{5}\right)^2 + 4\left(\frac{4}{5}\right)^3 + 5\left(\frac{4}{5}\right)^4\right\} \approx 2.56$$
REVIEW QUESTIONS
10. Derive expressions for the mean and variance of a geometric distribution.
Mean and Variance of Poisson Distribution  For a Poisson random variable, p_X(x) = e^{-λ}λ^x/x!, x = 0, 1, 2, ….
$$E[X] = \sum_{x=0}^{\infty} x\,\frac{e^{-\lambda}\lambda^x}{x!} = e^{-\lambda}\sum_{x=1}^{\infty} \frac{\lambda^x}{(x-1)!} = \lambda e^{-\lambda}\sum_{x=1}^{\infty} \frac{\lambda^{x-1}}{(x-1)!}$$
$$= \lambda e^{-\lambda}\left[1 + \frac{\lambda}{1!} + \frac{\lambda^2}{2!} + \cdots\right] = \lambda e^{-\lambda}(e^{\lambda}) = \lambda$$
$$E[X] = \lambda \qquad (3.46)$$
Variance
$$\sigma_X^2 = \mathrm{Var}[X] = E[X^2] - \{E[X]\}^2$$
$$E[X^2] = \sum_{x=0}^{\infty} x^2 p_X(x) = \sum_{x=0}^{\infty} \frac{x^2 e^{-\lambda}\lambda^x}{x!} = \sum_{x=0}^{\infty} \frac{[x(x-1) + x]\,e^{-\lambda}\lambda^x}{x!} = \sum_{x=0}^{\infty} \frac{x(x-1)e^{-\lambda}\lambda^x}{x!} + \sum_{x=0}^{\infty} \frac{x\,e^{-\lambda}\lambda^x}{x!}$$
Since the x = 0 and x = 1 terms of the first summation are zero,
$$E[X^2] = \sum_{x=2}^{\infty} \frac{x(x-1)e^{-\lambda}\lambda^x}{x(x-1)(x-2)!} + E[X] = \lambda^2 \sum_{x=2}^{\infty} \frac{e^{-\lambda}\lambda^{x-2}}{(x-2)!} + E[X]$$
$$= \lambda^2 e^{-\lambda}\left[1 + \frac{\lambda}{1!} + \frac{\lambda^2}{2!} + \cdots\right] + E[X] = \lambda^2 e^{-\lambda}(e^{\lambda}) + \lambda = \lambda^2 + \lambda \qquad (3.47)$$
$$\sigma_X^2 = \mathrm{Var}[X] = \lambda^2 + \lambda - \lambda^2 = \lambda \qquad (3.48)$$
For a Poisson random variable, the mean and variance are the same.

REVIEW QUESTIONS
11. Prove that for a Poisson random variable, the mean and variance are the same.
12. Derive expressions for the mean and variance of a Poisson random variable.
Solved Problems

3.37 The number of customers that enter a bank in an hour is a Poisson random variable X, and suppose that P[X = 0] = 0.223. Determine the mean and variance of X.

Solution  P[X = 0] = e^{-λ} = 0.223 gives λ = −ln(0.223) = 1.5. For a Poisson random variable the mean and variance both equal λ, so E[X] = Var(X) = 1.5.

3.38 X and Y are independent Poisson random variables such that P(X = 1) = P(X = 2) and P(Y = 2) = P(Y = 3). Find Var(X − 2Y).

Solution  With P(X = x) = e^{-λ}λ^x/x!,
$$\frac{e^{-\lambda}\lambda^1}{1!} = \frac{e^{-\lambda}\lambda^2}{2!} \;\Rightarrow\; \lambda^2 - 2\lambda = 0 \;\Rightarrow\; \lambda = 2$$
Similarly, with P(Y = y) = e^{-λ'}λ'^y/y!,
$$\frac{e^{-\lambda'}\lambda'^2}{2!} = \frac{e^{-\lambda'}\lambda'^3}{3!} \;\Rightarrow\; \lambda'^3 - 3\lambda'^2 = 0 \;\Rightarrow\; \lambda' = 3$$
For a Poisson random variable the variance equals its parameter, so Var(X) = 2 and Var(Y) = 3. Since X and Y are independent,
$$\mathrm{Var}(X - 2Y) = \mathrm{Var}(X) + 4\,\mathrm{Var}(Y) = 2 + 4(3) = 14$$
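A quick simulation check of this result (a sketch, assuming NumPy is available; the seed and sample size are arbitrary):

```python
import numpy as np

# Simulation check of Var(X - 2Y) for independent Poissons (Solved Problem 3.38)
rng = np.random.default_rng(1)
x = rng.poisson(2, 1_000_000)
y = rng.poisson(3, 1_000_000)
print(np.var(x - 2 * y))   # ~14
```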
Mean and Variance of Uniform Distribution  For a uniform random variable over (a, b), f_X(x) = 1/(b − a) for a ≤ x ≤ b.
$$E[X] = \int_a^b x\left(\frac{1}{b-a}\right)dx = \frac{1}{b-a}\left.\frac{x^2}{2}\right|_a^b = \frac{b^2 - a^2}{2(b-a)} = \frac{b+a}{2} \qquad (3.50)$$
Variance
$$\sigma_X^2 = E[X^2] - \{E[X]\}^2$$
$$E[X^2] = \int_a^b x^2\left(\frac{1}{b-a}\right)dx = \frac{1}{b-a}\left.\frac{x^3}{3}\right|_a^b = \frac{b^3 - a^3}{3(b-a)} = \frac{b^2 + ab + a^2}{3} \qquad (3.51)$$
$$\sigma_X^2 = \frac{a^2 + ab + b^2}{3} - \left(\frac{b+a}{2}\right)^2 = \frac{b^2 - 2ab + a^2}{12} = \frac{(b-a)^2}{12} \qquad (3.52)$$
Solved Problems

3.39 Find the nth moment of a uniform random variable and hence its mean.

Solution  f_X(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise. Therefore
$$E[X^n] = \int_a^b x^n\left(\frac{1}{b-a}\right)dx = \frac{1}{b-a}\left[\frac{x^{n+1}}{n+1}\right]_a^b = \frac{1}{b-a}\left[\frac{b^{n+1} - a^{n+1}}{n+1}\right] \qquad (3.53)$$
With n = 1, E[X] = (b² − a²)/(2(b − a)) = (a + b)/2.
3.40 Rounding errors X are uniformly distributed. If the sixth decimal place of a calculator is rounded, find the mean and variance of X, and the probability that the numerical error lies between 0.000001 and 0.000004.

Solution  Since the sixth decimal place is rounded, X is uniformly distributed over (−0.000005, 0.000005). Then E[X] = 0 and
$$\sigma_X^2 = \frac{(b-a)^2}{12} = \frac{(0.00001)^2}{12} = 8.33 \times 10^{-12}$$
$$P[0.000001 \le X \le 0.000004] = \frac{0.000004 - 0.000001}{0.00001} = 0.3$$
3.41 A random variable X has a continuous uniform distribution over the interval (2, 6).
(a) Determine the mean, variance and standard deviation of X.
(b) Find P[X ≤ 4].

Solution  Here a = 2 and b = 6.
$$E[X] = \frac{2+6}{2} = 4, \qquad \sigma_X^2 = \frac{(b-a)^2}{12} = \frac{(6-2)^2}{12} = \frac{16}{12} = 1.33$$
Standard deviation σ_X = √1.33 = 1.155
$$P\{X \le 4\} = \int_2^4 f_X(x)\,dx = \int_2^4 \frac{1}{6-2}\,dx = \frac{1}{4}(2) = 0.5$$
3.42 Suppose the time it takes to fill an application form is uniformly distributed between 2 and 3 minutes.
(a) What are the mean and variance of the time it takes to fill the form?
(b) What is the probability that it will take less than 150 s to fill the form?
(c) Determine the cumulative distribution of the time it takes to fill the form.

Solution  Let X be the random variable that represents the time to fill the form. X is uniformly distributed between 2 and 3 minutes, so a = 2 min, b = 3 min and
f_X(x) = 1 for 2 < x < 3, and 0 otherwise.
$$E[X] = \frac{b+a}{2} = \frac{3+2}{2} = 2.5 \text{ min}, \qquad \sigma_X^2 = \mathrm{Var}[X] = \frac{(b-a)^2}{12} = \frac{1}{12} = 0.0833 \text{ min}^2$$
(b) 150 s = 2.5 min, so
$$P[X \le 2.5] = \int_2^{2.5} (1)\,dx = 0.5$$
(c)
$$F_X(x) = \frac{x-a}{b-a} = \frac{x-2}{3-2} = x - 2 \quad \text{for } 2 \le x \le 3$$
with F_X(x) = 0 for x < 2 and F_X(x) = 1 for x > 3.
3.43 Find the mean and variance of the discrete uniform random variable that takes the n values 1, 2, …, n with equal probability.

Solution  Each value has probability 1/n.
$$E[X] = \sum_i x_i\,p_X(x_i) = 1\left(\frac{1}{n}\right) + 2\left(\frac{1}{n}\right) + \cdots + n\left(\frac{1}{n}\right) = \frac{1}{n}[1 + 2 + \cdots + n] = \frac{1}{n}\cdot\frac{n(n+1)}{2} = \frac{n+1}{2}$$
$$E[X^2] = \sum_i x_i^2\,p_X(x_i) = \frac{1}{n}[1^2 + 2^2 + 3^2 + \cdots + n^2] = \frac{1}{n}\cdot\frac{n(n+1)(2n+1)}{6} = \frac{(n+1)(2n+1)}{6}$$
$$\sigma_X^2 = E[X^2] - \{E[X]\}^2 = \frac{(n+1)(2n+1)}{6} - \frac{(n+1)^2}{4} = \frac{n+1}{2}\left\{\frac{2n+1}{3} - \frac{n+1}{2}\right\} = \frac{n+1}{2}\left\{\frac{n-1}{6}\right\} = \frac{n^2 - 1}{12}$$
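A direct numerical check of these closed forms, sketched in Python for an arbitrary n:

```python
# Check of Solved Problem 3.43: discrete uniform on {1, ..., n}
n = 10
vals = range(1, n + 1)
mean = sum(vals) / n
var = sum(v * v for v in vals) / n - mean**2
print(mean, (n + 1) / 2)        # 5.5, 5.5
print(var, (n * n - 1) / 12)    # 8.25, 8.25
```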
Practice Problems
3.25 The pdf of the time it takes to fill a water tank is f_X(x) = 0.05 for 30 ≤ x ≤ 50 minutes. Determine the mean and variance of the time to complete the filling of the tank. (Ans: mean 40, variance 33.33)
3.26 Suppose X has a continuous uniform distribution over the interval [−2, 2].
(a) Determine the mean, variance, and standard deviation of X.
(b) Determine the value of x such that P[−x < X < x] = 0.90.
(c) Find the mean and variance of the discrete uniform distribution over the values x = n, n + 1, …, m.
(Ans: (a) mean 0, variance 1.33, standard deviation 1.155; (b) x = 1.8)

REVIEW QUESTION
13. Derive expressions for the mean and variance of a uniform random variable X over the interval (a, b).
Mean and Variance of Gaussian Distribution  For a Gaussian random variable X ~ N(m_X, σ_X²),
$$f_X(x) = \frac{1}{\sqrt{2\pi\sigma_X^2}}\,e^{-(x-m_X)^2/2\sigma_X^2}$$
In E[X] = ∫ x f_X(x) dx, let (x − m_X)/σ_X = t, so dx = σ_X dt and x = σ_X t + m_X:
$$E[X] = \frac{1}{\sqrt{2\pi\sigma_X^2}}\int_{-\infty}^{\infty} (\sigma_X t + m_X)\,e^{-t^2/2}\,(\sigma_X\,dt) = \frac{\sigma_X}{\sqrt{2\pi}}\int_{-\infty}^{\infty} t\,e^{-t^2/2}\,dt + \frac{m_X}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-t^2/2}\,dt$$
The first integral is zero since t e^{-t²/2} is an odd function. Hence
$$E[X] = m_X\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-t^2/2}\,dt$$
The integral $\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-t^2/2}\,dt$ represents the area enclosed by the pdf of a normal random variable N(0, 1), and equals unity.
Therefore,
E[X] = m_X

Variance
$$\sigma_X^2 = E[X^2] - \{E[X]\}^2$$
$$E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx = \int_{-\infty}^{\infty} x^2\,\frac{1}{\sqrt{2\pi\sigma_X^2}}\,e^{-(x-m_X)^2/2\sigma_X^2}\,dx$$
Again let (x − m_X)/σ_X = t, so dx = σ_X dt and x = σ_X t + m_X:
$$E[X^2] = \frac{1}{\sqrt{2\pi\sigma_X^2}}\int_{-\infty}^{\infty} (\sigma_X t + m_X)^2\,e^{-t^2/2}\,(\sigma_X\,dt)$$
$$= \frac{\sigma_X^2}{\sqrt{2\pi}}\int_{-\infty}^{\infty} t^2 e^{-t^2/2}\,dt + \frac{2\sigma_X m_X}{\sqrt{2\pi}}\int_{-\infty}^{\infty} t\,e^{-t^2/2}\,dt + \frac{m_X^2}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-t^2/2}\,dt$$
We know $\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-t^2/2}\,dt = 1$ and $\int_{-\infty}^{\infty} t\,e^{-t^2/2}\,dt = 0$. Also
$$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} t^2 e^{-t^2/2}\,dt = 1$$
Therefore
$$E[X^2] = \sigma_X^2 + m_X^2 \qquad (3.56a)$$
$$\mathrm{Var}(X) = E[X^2] - \{E[X]\}^2 = \sigma_X^2 + m_X^2 - m_X^2 = \sigma_X^2 \qquad (3.56b)$$
Solved Problem

3.44 In a semester examination, the total score of a student is a Gaussian random variable X ~ N(m_X, σ_X²). If the average score is 600 and 15.9% of the students score above 750, find m_X, σ_X and P(750 < X < 900).

Solution  m_X = 600. Since P(X > 750) = 0.159 = 1 − Φ(1), the score 750 lies one standard deviation above the mean:
$$\frac{750 - 600}{\sigma_X} = 1 \;\Rightarrow\; \sigma_X = 150$$
$$P(750 < X < 900) = \Phi\!\left(\frac{900-600}{150}\right) - \Phi\!\left(\frac{750-600}{150}\right) = \Phi(2) - \Phi(1) = 0.9772 - 0.8413 = 0.136$$
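The standard-normal probabilities used above can be checked with only the math module (Φ written via the error function):

```python
from math import erf, sqrt

# Check of Solved Problem 3.44 using the standard normal CDF Phi
Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))
m, s = 600, 150
print(1 - Phi((750 - m) / s))                   # ~0.159, matches the data
print(Phi((900 - m) / s) - Phi((750 - m) / s))  # P(750 < X < 900) ~ 0.136
```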
3.45 If X is a Gaussian random variable X ~ N(m_X, σ_X²), find the mean and variance of Y = aX + b for all a ≠ 0.

Solution  Given Y = aX + b,
$$E[Y] = aE[X] + b = a\,m_X + b$$
$$\mathrm{Var}(Y) = E[(Y - E[Y])^2] = E[a^2(X - m_X)^2] = a^2\sigma_X^2 \qquad (3.57)$$
Practice Problems
3.27 If X is a normal random variable with m_X = 2 and σ_X² = 16, find (i) P{2 ≤ X ≤ 6}, (ii) P[|X − 2| > 6]. (Ans: (i) 0.34; (ii) 0.1296)
3.28 The compressive strength of samples of cement can be modelled by a random variable X ~ N(m_X, σ_X²), where m_X = 400 kg/cm² and σ_X = 80 kg/cm².
(i) What is the probability that a sample's strength is less than 450 kg/cm²?
(ii) What strength is exceeded by 90% of the samples?

REVIEW QUESTION
14. Derive the mean and variance of a Gaussian random variable.
Mean and Variance of Exponential Distribution  For an exponential random variable, f_X(x) = λe^{-λx}, x ≥ 0.
$$E[X] = \int_0^{\infty} x\,\lambda e^{-\lambda x}\,dx = \lambda\int_0^{\infty} x e^{-\lambda x}\,dx \qquad \left(\lim_{x\to\infty}\frac{x}{e^{\lambda x}} = 0\right)$$
Integrating by parts,
$$= \lambda\left\{\left[\frac{-x e^{-\lambda x}}{\lambda}\right]_0^{\infty} + \int_0^{\infty} \frac{e^{-\lambda x}}{\lambda}\,dx\right\} = \lambda\left[(0-0) - \frac{1}{\lambda^2}\,e^{-\lambda x}\Big|_0^{\infty}\right] = \frac{1}{\lambda} \qquad (3.59)$$
$$m_X = E[X] = \frac{1}{\lambda}$$
Variance
$$\sigma_X^2 = \mathrm{Var}[X] = E[X^2] - \{E[X]\}^2$$
$$E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx = \int_0^{\infty} x^2\,\lambda e^{-\lambda x}\,dx = \lambda\int_0^{\infty} x^2 e^{-\lambda x}\,dx$$
$$= \lambda\left\{\left[\frac{-x^2 e^{-\lambda x}}{\lambda} - \frac{2x e^{-\lambda x}}{\lambda^2} - \frac{2 e^{-\lambda x}}{\lambda^3}\right]_0^{\infty}\right\} = \lambda\left(\frac{2}{\lambda^3}\right) = \frac{2}{\lambda^2} \qquad (3.60)$$
$$\mathrm{Var}(X) = \sigma_X^2 = E[X^2] - \{E[X]\}^2 = \frac{2}{\lambda^2} - \left(\frac{1}{\lambda}\right)^2 = \frac{1}{\lambda^2} \qquad (3.61)$$
Solved Problems

3.48 The pdf of the time T in weeks between employee strikes at a certain company is given by
f_T(t) = 0.01 e^{-0.01t}, t ≥ 0
(a) Find P(20 < T < 40).
(b) What is the expected time between strikes at the company?

Solution  Given f_T(t) = 0.01 e^{-0.01t}, t ≥ 0,
$$F_T(t) = \int_{-\infty}^{t} f_T(\tau)\,d\tau = \int_0^t 0.01\,e^{-0.01\tau}\,d\tau = 0.01\left.\frac{e^{-0.01\tau}}{(-0.01)}\right|_0^t = 1 - e^{-0.01t}$$
(a) P(20 < T < 40) = F_T(40) − F_T(20) = e^{-0.2} − e^{-0.4} = 0.8187 − 0.6703 = 0.148
(b) E[T] = 1/λ = 1/0.01 = 100 weeks
For the exponential random variable with λ = 0.3466,
$$\text{Mean} = \frac{1}{\lambda} = \frac{1}{0.3466} = 2.885, \qquad \text{Variance} = \frac{1}{\lambda^2} = \frac{1}{(0.3466)^2} = 8.324$$
3.50 If X has a uniform distribution in (−2, 3) and Y has an exponential distribution with parameter λ, find λ such that Var(X) = Var(Y).

Solution  X has a uniform distribution in (−2, 3), so its pdf is
f_X(x) = 1/5 for −2 < x < 3, and 0 otherwise
$$\mathrm{Var}(X) = \frac{(b-a)^2}{12} = \frac{25}{12}$$
Y is an exponential random variable with pdf f_Y(y) = λe^{-λy} for y ≥ 0 (0 otherwise), and the variance of an exponential random variable is 1/λ². Hence
$$\frac{1}{\lambda^2} = \frac{25}{12} \;\Rightarrow\; \lambda^2 = \frac{12}{25} \;\Rightarrow\; \lambda = \frac{2\sqrt{3}}{5}$$
REVIEW QUESTION
15. For a random variable X with exponential distribution, derive expressions for mean and variance.
Mean and Variance of Rayleigh Distribution  For a Rayleigh random variable, f_X(x) = (x/σ²)e^{-x²/2σ²}, x ≥ 0. Integrating E[X] = ∫₀^∞ x (x/σ²)e^{-x²/2σ²} dx by parts,
$$E[X] = \left[-x\,e^{-x^2/2\sigma^2}\right]_0^{\infty} + \int_0^{\infty} e^{-x^2/2\sigma^2}\,dx = \frac{\sqrt{2\pi\sigma^2}}{2}\left[\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-x^2/2\sigma^2}\,dx\right] = \sigma\sqrt{\frac{\pi}{2}}$$
Note: the term in brackets is the area under the pdf curve of a Gaussian random variable N(0, σ²), equal to 1.

Variance
$$\sigma_X^2 = E[X^2] - \{E[X]\}^2$$
$$E[X^2] = \int_0^{\infty} x^2\,\frac{x}{\sigma^2}\,e^{-x^2/2\sigma^2}\,dx = \int_0^{\infty} x^2\,d\!\left(-e^{-x^2/2\sigma^2}\right) = \left[-x^2 e^{-x^2/2\sigma^2}\right]_0^{\infty} + \int_0^{\infty} 2x\,e^{-x^2/2\sigma^2}\,dx \qquad (3.63)$$
Let x²/2σ² = t, so x dx = σ² dt:
$$E[X^2] = 2\sigma^2\int_0^{\infty} e^{-t}\,dt = 2\sigma^2$$
$$\sigma_X^2 = 2\sigma^2 - \frac{\pi}{2}\sigma^2 = \sigma^2\left[2 - \frac{\pi}{2}\right] \qquad (3.64)$$
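These Rayleigh moments are easy to confirm by simulation; a sketch assuming NumPy, with inverse-CDF sampling (x = σ√(−2 ln U) since F(x) = 1 − e^{-x²/2σ²}):

```python
import numpy as np

# Monte Carlo check of the Rayleigh mean and variance (sigma = 2 is arbitrary)
rng = np.random.default_rng(2)
sigma = 2.0
x = sigma * np.sqrt(-2 * np.log(rng.random(1_000_000)))
print(x.mean(), sigma * np.sqrt(np.pi / 2))        # ~2.507
print(x.var(), sigma**2 * (2 - np.pi / 2))         # ~1.717
```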
Mean and Variance of Gamma Distribution  For a gamma random variable, f_X(x) = x^{α-1}e^{-x/β}/(β^α Γ(α)), x > 0.
$$E[X] = \int_0^{\infty} x\,\frac{x^{\alpha-1} e^{-x/\beta}}{\beta^{\alpha}\Gamma(\alpha)}\,dx = \frac{1}{\beta^{\alpha}\Gamma(\alpha)}\int_0^{\infty} x^{\alpha} e^{-x/\beta}\,dx \qquad (3.66)$$
Let x/β = t, so dx = β dt:
$$= \frac{1}{\beta^{\alpha}\Gamma(\alpha)}\int_0^{\infty} (\beta t)^{\alpha} e^{-t}\,\beta\,dt = \frac{\beta}{\Gamma(\alpha)}\int_0^{\infty} t^{\alpha} e^{-t}\,dt \qquad (3.67)$$
We have
$$\Gamma(\alpha) = \int_0^{\infty} x^{\alpha-1} e^{-x}\,dx \qquad (3.68)$$
$$\Rightarrow\; \Gamma(\alpha+1) = \int_0^{\infty} x^{\alpha} e^{-x}\,dx \qquad (3.69)$$
Hence, using Γ(n) = (n − 1)!,
$$E[X] = \frac{\beta\,\Gamma(\alpha+1)}{\Gamma(\alpha)} = \frac{\beta\,\alpha!}{(\alpha-1)!} = \alpha\beta \qquad (3.70)$$
Variance of Gamma Distribution
$$E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx = \int_0^{\infty} x^2\,\frac{x^{\alpha-1} e^{-x/\beta}}{\beta^{\alpha}\Gamma(\alpha)}\,dx = \int_0^{\infty} \frac{x^{\alpha+1} e^{-x/\beta}}{\beta^{\alpha}\Gamma(\alpha)}\,dx$$
Let x/β = t, so dx = β dt:
$$E[X^2] = \int_0^{\infty} (\beta t)^{\alpha+1} e^{-t}\,\frac{\beta\,dt}{\beta^{\alpha}\Gamma(\alpha)} = \frac{\beta^2}{\Gamma(\alpha)}\int_0^{\infty} t^{\alpha+1} e^{-t}\,dt = \frac{\beta^2\,\Gamma(\alpha+2)}{\Gamma(\alpha)} = \frac{\beta^2(\alpha+1)!}{(\alpha-1)!} = (\alpha+1)\alpha\beta^2 \qquad (3.71)$$
$$\mathrm{Var}(X) = E[X^2] - \{E[X]\}^2 = \alpha(\alpha+1)\beta^2 - [\alpha\beta]^2 = \alpha^2\beta^2 + \alpha\beta^2 - \alpha^2\beta^2 = \alpha\beta^2 \qquad (3.72)$$
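NumPy's gamma sampler uses exactly this (shape α, scale β) parameterization, so the result can be checked directly:

```python
import numpy as np

# Spot-check E[X] = alpha*beta and Var(X) = alpha*beta**2 for the gamma pdf
rng = np.random.default_rng(3)
alpha, beta = 3.0, 2.0
x = rng.gamma(shape=alpha, scale=beta, size=1_000_000)
print(x.mean(), alpha * beta)      # ~6
print(x.var(), alpha * beta**2)    # ~12
```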
Mean and Variance of Beta Distribution  For a beta random variable, f_X(x) = x^{α-1}(1-x)^{β-1}/β(α, β), 0 < x < 1, where
$$\beta(\alpha, \beta) = \int_0^1 x^{\alpha-1}(1-x)^{\beta-1}\,dx = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}$$
$$E[X] = \int_0^1 x\,\frac{x^{\alpha-1}(1-x)^{\beta-1}}{\beta(\alpha,\beta)}\,dx = \frac{1}{\beta(\alpha,\beta)}\int_0^1 x^{\alpha}(1-x)^{\beta-1}\,dx = \frac{\beta(\alpha+1, \beta)}{\beta(\alpha,\beta)} \qquad (3.74)$$
$$= \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\cdot\frac{\Gamma(\alpha+1)\Gamma(\beta)}{\Gamma(\alpha+1+\beta)}$$
We have
$$\Gamma(a) = (a - 1)! \qquad (3.75)$$
$$E[X] = \frac{(\alpha+\beta-1)!}{(\alpha-1)!(\beta-1)!}\cdot\frac{\alpha!(\beta-1)!}{(\alpha+\beta)!} = \frac{\alpha}{\alpha+\beta} \qquad (3.76)$$
$$E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx = \frac{1}{\beta(\alpha,\beta)}\int_0^1 x^{\alpha+1}(1-x)^{\beta-1}\,dx = \frac{\beta(\alpha+2, \beta)}{\beta(\alpha,\beta)} = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\cdot\frac{\Gamma(\alpha+2)\Gamma(\beta)}{\Gamma(\alpha+\beta+2)}$$
$$= \frac{(\alpha+\beta-1)!\,(\alpha+1)!}{(\alpha-1)!\,(\alpha+\beta+1)!} = \frac{\alpha(\alpha+1)}{(\alpha+\beta+1)(\alpha+\beta)} \qquad (3.77)$$
$$\mathrm{Var}(X) = \frac{\alpha(\alpha+1)(\alpha+\beta) - \alpha^2(\alpha+\beta+1)}{(\alpha+\beta+1)(\alpha+\beta)^2} = \frac{\alpha\beta}{(\alpha+\beta+1)(\alpha+\beta)^2} \qquad (3.78)$$
Means and Variance of Weibull Distribution  The pdf of a random variable X with Weibull distribution is given by
$$f_X(x) = \alpha\beta\,x^{\beta-1}\,e^{-\alpha x^{\beta}} \quad\text{for } x > 0,\ \alpha > 0,\ \beta > 0; \quad 0 \text{ otherwise} \qquad (3.79)$$
$$E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \alpha\beta\int_0^{\infty} x^{\beta}\,e^{-\alpha x^{\beta}}\,dx$$
Let αx^β = t, so x = (t/α)^{1/β} and dx = (1/β)(t/α)^{1/β - 1}(1/α) dt:
$$E[X] = \int_0^{\infty} \left(\frac{t}{\alpha}\right)^{1/\beta} e^{-t}\,dt = \left(\frac{1}{\alpha}\right)^{1/\beta}\int_0^{\infty} t^{1/\beta}\,e^{-t}\,dt = \left(\frac{1}{\alpha}\right)^{1/\beta}\Gamma\!\left(1 + \frac{1}{\beta}\right) \qquad (3.80)$$
using $\int_0^{\infty} x^{a-1} e^{-x}\,dx = \Gamma(a)$.
$$E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx = \alpha\beta\int_0^{\infty} x^{\beta+1}\,e^{-\alpha x^{\beta}}\,dx$$
With the same substitution αx^β = t,
$$E[X^2] = \int_0^{\infty} \left(\frac{t}{\alpha}\right)^{2/\beta} e^{-t}\,dt = \left(\frac{1}{\alpha}\right)^{2/\beta}\Gamma\!\left(1 + \frac{2}{\beta}\right) \qquad (3.81)$$
Hence Var(X) = (1/α)^{2/β}[Γ(1 + 2/β) − Γ²(1 + 1/β)].
Mean and Variance of Chi-Square Distribution  For a chi-square random variable with n degrees of freedom, f_X(x) = x^{n/2-1} e^{-x/2}/(2^{n/2} Γ(n/2)), x ≥ 0.
$$E[X] = \int_0^{\infty} x\,\frac{x^{\frac{n}{2}-1} e^{-x/2}}{2^{n/2}\,\Gamma(n/2)}\,dx = \int_0^{\infty} \frac{x^{n/2}\,e^{-x/2}}{2^{n/2}\,\Gamma(n/2)}\,dx$$
Let x/2 = t:
$$E[X] = \int_0^{\infty} \frac{2^{n/2}\,t^{n/2}\,e^{-t}}{2^{n/2}\,\Gamma(n/2)}\,(2\,dt) = \frac{2}{\Gamma(n/2)}\,\Gamma\!\left(\frac{n}{2}+1\right) = \frac{2(n/2)!}{(n/2-1)!} = 2\left(\frac{n}{2}\right) = n \qquad (3.84)$$
$$E[X^2] = \int_0^{\infty} \frac{x^{\frac{n}{2}+1}\,e^{-x/2}}{2^{n/2}\,\Gamma(n/2)}\,dx$$
With x/2 = t again,
$$E[X^2] = \frac{1}{2^{n/2}\,\Gamma(n/2)}\int_0^{\infty} 2^{\frac{n}{2}+2}\,t^{\frac{n}{2}+1}\,e^{-t}\,dt = \frac{4\,\Gamma\!\left(\frac{n}{2}+2\right)}{\Gamma(n/2)} = 4\left(\frac{n}{2}+1\right)\left(\frac{n}{2}\right) = n(n+2) \qquad (3.85)$$
$$\mathrm{Var}(X) = E[X^2] - \{E[X]\}^2 = n(n+2) - n^2 = 2n \qquad (3.86)$$
REVIEW QUESTION
16. Derive expressions for the mean and variance of a random variable with Rayleigh distribution.
Skew  The third central moment μ₃ is called the skew of the density function:
$$\mu_3 = E[(X - m_X)^3] = \int_{-\infty}^{\infty} (x - m_X)^3 f_X(x)\,dx \qquad (3.87)$$
For a discrete random variable,
$$\mu_3 = \sum_i (x_i - m_X)^3\,p_X(x_i) \qquad (3.88)$$
Skew is a measure of the asymmetry of the density function of a random variable about its mean. If the pdf is symmetric about x = m_X, it has zero skew; in fact, a random variable whose pdf is symmetric about x = m_X has μ_n = 0 for all odd values of n.

Skewness Coefficient  The normalized third central moment is known as the skewness coefficient:
$$\text{skewness coefficient} = \frac{\mu_3}{\sigma_X^3} = \frac{E[(X - m_X)^3]}{\{E[(X - m_X)^2]\}^{3/2}} \qquad (3.89)$$
The skewness coefficient is a dimensionless quantity. It is positive if the random variable has a pdf skewed to the right, as shown in Fig. 3.4(a), and negative if the pdf is skewed to the left, as shown in Fig. 3.4(c).
KURTOSIS 3.10
The fourth central moment is called kurtosis and is a measure of the peakedness of a random variable near the mean. The coefficient of kurtosis is dimensionless and is given by
$$C_k = \frac{E[(X - m_X)^4]}{\sigma_X^4} \qquad (3.90)$$
The kurtosis of a normal distribution is three. An alternate definition,
$$\frac{E[(X - m_X)^4]}{\sigma_X^4} - 3 \qquad (3.91)$$
is known as the excess kurtosis (Fig. 3.5). This definition is used so that the normal distribution has a kurtosis of zero. Distributions with negative excess kurtosis are known as platykurtic; examples are the uniform and Bernoulli distributions. Distributions with positive excess kurtosis are known as leptokurtic; examples are the Laplace, exponential and Poisson distributions.

REVIEW QUESTIONS
17. Define skew and skewness coefficient.
18. Define kurtosis.
Central moments μ_n are related to the moments m_k about the origin by
$$\mu_n = \sum_{k=0}^{n} \binom{n}{k} m_k\,(-m_X)^{n-k}$$
Conversely,
$$m_n = E\left[(X - m_X + m_X)^n\right] = E\left[\sum_{k=0}^{n} \binom{n}{k}(X - m_X)^k\,m_X^{n-k}\right] = \sum_{k=0}^{n} \binom{n}{k} E[(X - m_X)^k]\,m_X^{n-k} = \sum_{k=0}^{n} \binom{n}{k} \mu_k\,m_X^{n-k}$$
For the first few orders:
$$\mu_0 = m_0 = 1 \qquad (3.92)$$
$$\mu_1 = \binom{1}{0} m_0(-m_X)^{1-0} + \binom{1}{1} m_1(-m_X)^{1-1} = m_1 - m_0 m_X \qquad (3.93)$$
Since m₀ = 1 and m_X = m₁, μ₁ = m₁ − m₁ = 0.
$$\mu_2 = \binom{2}{0} m_0(-m_X)^2 + \binom{2}{1} m_1(-m_X) + \binom{2}{2} m_2 = m_1^2 - 2m_1^2 + m_2 = m_2 - m_1^2 \qquad (3.94)$$
$$\mu_3 = \binom{3}{0} m_0(-m_1)^3 + \binom{3}{1} m_1(-m_1)^2 + \binom{3}{2} m_2(-m_1) + \binom{3}{3} m_3 = m_3 - 3m_1 m_2 + 2m_1^3$$
Solved Problems

3.51 Find the skew and skewness coefficient for the exponential density function
$$f_X(x) = \frac{1}{b}\,e^{-x/b} \text{ for } x \ge 0; \quad 0 \text{ for } x < 0$$

Solution
$$E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \frac{1}{b}\int_0^{\infty} x\,e^{-x/b}\,dx = \frac{1}{b}\left\{\left[-bx\,e^{-x/b}\right]_0^{\infty} + b\int_0^{\infty} e^{-x/b}\,dx\right\} = \frac{b^2}{b} = b \qquad (3.96)$$
$$E[X^2] = \frac{1}{b}\int_0^{\infty} x^2 e^{-x/b}\,dx = \frac{1}{b}\left\{\left[-bx^2 e^{-x/b} - 2b^2 x\,e^{-x/b}\right]_0^{\infty} + 2b^2\int_0^{\infty} e^{-x/b}\,dx\right\} = \frac{2b^3}{b} = 2b^2 \qquad (3.97)$$
$$\sigma_X^2 = E[X^2] - \{E[X]\}^2 = 2b^2 - b^2 = b^2$$
$$E[X^3] = \frac{1}{b}\int_0^{\infty} x^3 e^{-x/b}\,dx = \frac{1}{b}\left\{\left[-bx^3 e^{-x/b} - 3b^2 x^2 e^{-x/b} - 6b^3 x\,e^{-x/b}\right]_0^{\infty} + 6b^3\int_0^{\infty} e^{-x/b}\,dx\right\} = 6b^3 \qquad (3.98)$$
$$\mu_3 = E[X^3] - 3m_X\sigma_X^2 - m_X^3 = 6b^3 - 3(b)(b^2) - b^3 = 2b^3 \qquad (3.99)$$
$$\text{Skewness coefficient} = \frac{\mu_3}{\sigma_X^3} = \frac{2b^3}{b^3} = 2$$
3.52 Find the kurtosis for the exponential density function given in the above problem.

Solution
$$E[X^4] = \int_{-\infty}^{\infty} x^4 f_X(x)\,dx = \frac{1}{b}\int_0^{\infty} x^4 e^{-x/b}\,dx = \frac{1}{b}(24b^5) = 24b^4$$
$$E[(X - m_X)^4] = E[X^4] - 4m_X E[X^3] + 6m_X^2 E[X^2] - 4m_X^3 E[X] + m_X^4$$
$$= 24b^4 - 4b(6b^3) + 6b^2(2b^2) - 4b^3(b) + b^4 = 9b^4$$
$$\text{Kurtosis} = \frac{E[(X - m_X)^4]}{\sigma_X^4} = \frac{9b^4}{b^4} = 9$$
3.53 Consider a random variable with Laplace distribution with pdf
$$f_X(x) = \frac{b}{2}\,e^{-b|x|}$$
Find the mean, variance, skewness coefficient and kurtosis.

Solution  The pdf is symmetric about x = 0. Therefore the mean is zero and all central moments equal the moments about the origin. Also, due to the symmetry of the pdf, all odd moments are zero: m₁ = 0, m₃ = 0, m₅ = 0, so the skewness coefficient is zero.
$$\sigma_X^2 = E[X^2] - m_X^2 = E[X^2] = \frac{b}{2}\int_{-\infty}^{\infty} x^2 e^{-b|x|}\,dx = b\int_0^{\infty} x^2 e^{-bx}\,dx$$
$$= b\left\{\left[\frac{-x^2 e^{-bx}}{b} - \frac{2x\,e^{-bx}}{b^2}\right]_0^{\infty} + \frac{2}{b^2}\int_0^{\infty} e^{-bx}\,dx\right\} = b\left(\frac{2}{b^3}\right) = \frac{2}{b^2}$$
$$\Rightarrow\; \sigma_X^2 = \frac{2}{b^2}$$
Since m_X = 0, μ₄ = E[X⁴]:
$$E[X^4] = \frac{b}{2}\int_{-\infty}^{\infty} x^4 e^{-b|x|}\,dx = b\int_0^{\infty} x^4 e^{-bx}\,dx = b\left(\frac{24}{b^5}\right) = \frac{24}{b^4}$$
$$\text{Kurtosis} = \frac{E[(X - m_X)^4]}{\sigma_X^4} = \frac{24/b^4}{4/b^4} = 6$$
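A simulation check of these Laplace results is straightforward (a sketch assuming NumPy; note NumPy's scale parameter is 1/b, and b = 1 here is arbitrary):

```python
import numpy as np

# Check of Solved Problem 3.53 (Laplace, b = 1): variance 2/b**2, kurtosis 6
rng = np.random.default_rng(4)
x = rng.laplace(loc=0.0, scale=1.0, size=1_000_000)
print(x.var())                        # ~2
print((x**4).mean() / x.var()**2)     # ~6
```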
3.54 Find the mean, variance and coefficient of skewness of a random variable with the shifted exponential pdf f_X(x) = (1/b)e^{-(x-a)/b}, x ≥ a.

Solution
$$E[X] = \frac{1}{b}e^{a/b}\int_a^{\infty} x\,e^{-x/b}\,dx = \frac{1}{b}e^{a/b}\left\{\left[-xb\,e^{-x/b}\right]_a^{\infty} + \int_a^{\infty} b\,e^{-x/b}\,dx\right\}$$
$$= \frac{1}{b}e^{a/b}\left\{ab\,e^{-a/b} - b^2(-e^{-a/b})\right\} = \frac{1}{b}e^{a/b}e^{-a/b}(ab + b^2) = a + b$$
$$E[X^2] = \frac{1}{b}e^{a/b}\int_a^{\infty} x^2 e^{-x/b}\,dx = \frac{1}{b}e^{a/b}\left\{\left[-bx^2 e^{-x/b} - 2xb^2 e^{-x/b}\right]_a^{\infty} + 2b^2\int_a^{\infty} e^{-x/b}\,dx\right\}$$
$$= \frac{1}{b}e^{a/b}\left\{e^{-a/b}(a^2 b + 2ab^2 + 2b^3)\right\} = a^2 + 2ab + 2b^2$$
$$\sigma_X^2 = E[X^2] - (a + b)^2 = b^2$$
$$E[X^3] = \frac{1}{b}e^{a/b}\int_a^{\infty} x^3 e^{-x/b}\,dx = \frac{1}{b}e^{a/b}\left\{a^3 b\,e^{-a/b} + 3b^2 a^2 e^{-a/b} + 6b^3 a\,e^{-a/b} + 6b^4 e^{-a/b}\right\}$$
$$E[X^3] = a^3 + 3a^2 b + 6ab^2 + 6b^3$$
$$\mu_3 = m_3 - 3m_X\sigma_X^2 - m_X^3 = (a^3 + 3a^2 b + 6ab^2 + 6b^3) - 3(a + b)b^2 - (a + b)^3 = 2b^3$$
$$\text{Coefficient of skewness} = \frac{\mu_3}{\sigma_X^3} = \frac{2b^3}{b^3} = 2$$
3.55 Prove that central moments μ_n are related to moments m_k about the origin by
$$\mu_n = \sum_{k=0}^{n} \binom{n}{k} (-m_X)^{n-k}\,m_k$$

Solution
$$\mu_n = E[(X - m_X)^n] = E\left[\sum_{k=0}^{n} \binom{n}{k} X^k(-m_X)^{n-k}\right] = \sum_{k=0}^{n} \binom{n}{k} E[X^k]\,(-m_X)^{n-k} = \sum_{k=0}^{n} \binom{n}{k} (-m_X)^{n-k}\,m_k$$
3.56 A random variable X has the pdf f_X(x) = k(x² − 5x + 8) for 0 ≤ x ≤ 4 and 0 otherwise. Find k and the moments of X.

Solution  We know $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$:
$$\int_0^4 k(x^2 - 5x + 8)\,dx = 1 \;\Rightarrow\; k\left[\left.\frac{x^3}{3}\right|_0^4 - \frac{5}{2}x^2\Big|_0^4 + 8x\Big|_0^4\right] = k\left[\frac{64}{3} - 40 + 32\right] = 1$$
from which k = 3/40. So
$$f_X(x) = \frac{3}{40}(x^2 - 5x + 8), \quad 0 \le x \le 4; \quad 0 \text{ otherwise}$$
$$m_n = \int_{-\infty}^{\infty} x^n f_X(x)\,dx = \frac{3}{40}\int_0^4 (x^{n+2} - 5x^{n+1} + 8x^n)\,dx = \frac{3}{40}\left[\frac{4^{n+3}}{n+3} - \frac{5(4^{n+2})}{n+2} + \frac{8(4^{n+1})}{n+1}\right]$$
$$m_0 = \frac{3}{40}\left[\frac{4^3}{3} - \frac{5(16)}{2} + 8(4)\right] = 1$$
$$m_1 = \frac{3}{40}\left[\frac{4^4}{4} - \frac{5(4)^3}{3} + \frac{8(4)^2}{2}\right] = \frac{3}{40}\left(\frac{64}{3}\right) = 1.6$$
$$m_2 = \frac{3}{40}\left[\frac{4^5}{5} - \frac{5(4)^4}{4} + \frac{8(4)^3}{3}\right] = \frac{3}{40}\left(\frac{832}{15}\right) = 4.16$$
$$\mu_2 = m_2 - m_1^2 = 4.16 - 2.56 = 1.6$$
3.57 Find the skew and coefficient of skewness for a Rayleigh random variable with
$$f_X(x) = \frac{2}{b}\,x\,e^{-x^2/b}, \; x \ge 0; \quad 0, \; x < 0$$

Solution
$$E[X] = m_X = \int_0^{\infty} x\left(\frac{2}{b}x\,e^{-x^2/b}\right)dx = \int_0^{\infty} x\,d\!\left(-e^{-x^2/b}\right) = \left[-x\,e^{-x^2/b}\right]_0^{\infty} + \int_0^{\infty} e^{-x^2/b}\,dx$$
Since lim_{x→∞} x e^{-x²/b} = 0 and $\int_{-\infty}^{\infty} \frac{e^{-x^2/b}}{\sqrt{b\pi}}\,dx = 1$ (the area under a normal pdf),
$$E[X] = \frac{\sqrt{b\pi}}{2}\int_{-\infty}^{\infty} \frac{e^{-x^2/b}}{\sqrt{b\pi}}\,dx = \frac{\sqrt{b\pi}}{2}$$
$$E[X^2] = \int_0^{\infty} x^2\left(\frac{2}{b}x\,e^{-x^2/b}\right)dx = \int_0^{\infty} x^2\,d\!\left(-e^{-x^2/b}\right) = \left[-x^2 e^{-x^2/b}\right]_0^{\infty} + \int_0^{\infty} 2x\,e^{-x^2/b}\,dx$$
Let x²/b = t, so 2x dx = b dt:
$$E[X^2] = b\int_0^{\infty} e^{-t}\,dt = b\left[-e^{-t}\right]_0^{\infty} = b$$
For the third moment, integrating by parts twice,
$$E[X^3] = \frac{3b}{2}\left\{\underbrace{\left[-x\,e^{-x^2/b}\right]_0^{\infty}}_{=\,0} + \int_0^{\infty} e^{-x^2/b}\,dx\right\} = \frac{3b}{2}\cdot\frac{\sqrt{b\pi}}{2} = \frac{3b}{4}\sqrt{b\pi}$$
$$\mu_3 = E[(X - m_X)^3] = E[X^3] - 3E[X]\,E[X^2] + 2\{E[X]\}^3 = \frac{3}{4}b^{3/2}\sqrt{\pi} - \frac{3}{2}b^{3/2}\sqrt{\pi} + \frac{2(b\pi)^{3/2}}{8} = \frac{\sqrt{\pi}}{4}(\pi - 3)\,b^{3/2}$$
Since σ_X² = E[X²] − {E[X]}² = b(4 − π)/4,
$$\text{Coefficient of skewness} = \frac{\mu_3}{\sigma_X^3} = \frac{\frac{\sqrt{\pi}}{4}(\pi - 3)\,b^{3/2}}{b^{3/2}\left(\frac{4-\pi}{4}\right)^{3/2}} = \frac{2\sqrt{\pi}(\pi - 3)}{(4 - \pi)^{3/2}} = 0.6311$$
Practice Problem
3.29 Find the skewness and skewness coefficient of the random variable X with pdf
f_X(x) = (2/9)x for 0 ≤ x ≤ 3, and 0 otherwise. (Ans: −0.0707)
Chebyshev's Inequality  For a random variable X with mean m_X and variance σ_X², write the variance as
$$\sigma_X^2 = \int_{-\infty}^{m_X-\varepsilon} (x - m_X)^2 f_X(x)\,dx + \int_{m_X-\varepsilon}^{m_X+\varepsilon} (x - m_X)^2 f_X(x)\,dx + \int_{m_X+\varepsilon}^{\infty} (x - m_X)^2 f_X(x)\,dx$$
Since $\int_{m_X-\varepsilon}^{m_X+\varepsilon} (x - m_X)^2 f_X(x)\,dx \ge 0$, omitting the second integral gives
$$\sigma_X^2 \ge \int_{-\infty}^{m_X-\varepsilon} (x - m_X)^2 f_X(x)\,dx + \int_{m_X+\varepsilon}^{\infty} (x - m_X)^2 f_X(x)\,dx$$
In both remaining regions (x − m_X)² ≥ ε², so
$$\sigma_X^2 \ge \varepsilon^2\left[P(X \le m_X - \varepsilon) + P(X \ge m_X + \varepsilon)\right]$$
$$\Rightarrow\; P[|X - m_X| \ge \varepsilon] \le \frac{\sigma_X^2}{\varepsilon^2} \qquad (3.101)$$
Solved Problems

3.58 A random variable X has a mean of 9 and a variance of 3. Use Chebyshev's inequality to obtain an upper bound for P[|X − 9| ≥ 3].

Solution  P[|X − 9| ≥ 3] ≤ σ²/ε² = 3/9 = 1/3

3.59 A random variable has an average of 10 and variance 5. Find the probability that X lies between 6 and 14.

Solution  P[6 < X < 14] = P[|X − 10| < 4] ≥ 1 − 5/16 = 11/16

3.60 Consider the random variable with the distribution given by P(X = k) = 2^{-k}, k = 1, 2, ….
Find the upper bound using Chebyshev's inequality and also find the actual probability.
Solution  Given P[X = k] = 1/2^k,
$$m_X = E[X] = \sum_{k=1}^{\infty} k\,2^{-k} = \frac{1/2}{(1 - 1/2)^2} = 2 \qquad \left(\sum_{k=1}^{\infty} k r^k = \frac{r}{(1-r)^2}\right)$$
$$E[X^2] = \sum_{k=1}^{\infty} k^2\,2^{-k} = 6 \qquad \left(\sum_{k=1}^{\infty} k^2 r^k = \frac{(1+r)r}{(1-r)^3}\right)$$
$$\mathrm{Var}(X) = E[X^2] - \{E[X]\}^2 = 6 - (2)^2 = 2$$
$$P[|X - 2| > 2] \le \frac{\mathrm{Var}(X)}{2^2} = \frac{1}{2}$$
Also, the actual probability is
$$P[|X - 2| > 2] = 1 - P[0 \le X \le 4] = 1 - \left(\frac{1}{2} + \frac{1}{2^2} + \frac{1}{2^3} + \frac{1}{2^4}\right) = 2^{-4}$$
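Comparing the Chebyshev bound with the exact tail is instructive; a minimal Python sketch (the truncation at k = 200 is an assumption that is numerically harmless here):

```python
# Chebyshev bound vs. exact tail for Solved Problem 3.60, P(X = k) = 2**-k
mean = sum(k * 2**-k for k in range(1, 200))        # 2
second = sum(k * k * 2**-k for k in range(1, 200))  # 6
var = second - mean**2                              # 2
bound = var / 2**2                                  # 0.5
actual = 1 - sum(2**-k for k in range(1, 5))        # 1/16
print(mean, var, bound, actual)
```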
3.61 The rainfall in Hyderabad is a normally distributed random variable with 50 cm mean and 9 cm² variance. Find a simple upper bound on the probability that the rainfall in a particular year will differ from the mean by 5 cm or more.

Solution  We know $P[|X - m| \ge \varepsilon] \le \sigma^2/\varepsilon^2$, so
$$P(|X - 50| \ge 5) \le \frac{9}{25}$$
The actual value of the probability is
$$P(|X - 50| \ge 5) = 2P(X \le 45) = 2P\left[\frac{X - m_X}{\sigma} \le \frac{45 - 50}{3}\right] = 2\Phi(-1.667) = 2[1 - \Phi(1.667)] = 2(1 - 0.95) = 0.1$$
3.62 If a die is thrown 2400 times, show that the probability that the number of sixes lies between 325 and 475 is at least 0.94.

Solution  The number of sixes X is binomial with n = 2400 and p = 1/6, so E[X] = np = 400 and σ² = npq = 2400(1/6)(5/6) = 333.3. By Chebyshev's inequality,
$$P[|X - 400| \ge 75] \le \frac{333.3}{75^2} = 0.059$$
so P[325 < X < 475] ≥ 1 − 0.059 = 0.94.
Markov's Inequality  For a nonnegative random variable X and any a > 0,
$$E[X] = \int_0^{\infty} x f_X(x)\,dx \ge \int_a^{\infty} x f_X(x)\,dx \ge \int_a^{\infty} a f_X(x)\,dx = a\int_a^{\infty} f_X(x)\,dx = a\,P[X \ge a]$$
$$\Rightarrow\; P[X \ge a] \le \frac{E[X]}{a}$$
Solved Problems

3.63 The number of transistors produced in a manufacturing unit during a week is a random variable with mean 100 and variance 25.
(a) What is the probability that the week's production will exceed 125?
(b) What is the probability that the production will be between 50 and 150 over one week?
(c) If the variance of a week's production is 40, what can be said about the probability that this week's production will be between 80 and 120?

Solution
(a) Let X be the number of transistors produced in one week. Using Markov's inequality,
$$P(X > 125) \le \frac{E[X]}{125} = \frac{100}{125} = \frac{4}{5} = 0.8$$
(b) Using Chebyshev's inequality,
$$P[|X - 100| \ge 50] \le \frac{\sigma_X^2}{(50)^2} = \frac{25}{2500} = 0.01$$
so P[50 < X < 150] ≥ 1 − 0.01 = 0.99.
(c) With σ_X² = 40, P[|X − 100| ≥ 20] ≤ 40/400 = 0.1, so P[80 < X < 120] ≥ 0.9.
3.64 Two fair dice whose faces are numbered 1 to 6 are thrown. If X is the sum of the numbers shown, prove that
$$P[|X - 7| \ge 3] \le \frac{35}{54}$$

Solution  By the symmetry of the distribution of the sum, E[X] = 7. Also
$$E[X^2] = 2^2\left(\frac{1}{36}\right) + 3^2\left(\frac{2}{36}\right) + 4^2\left(\frac{3}{36}\right) + 5^2\left(\frac{4}{36}\right) + 6^2\left(\frac{5}{36}\right) + 7^2\left(\frac{6}{36}\right) + 8^2\left(\frac{5}{36}\right) + 9^2\left(\frac{4}{36}\right) + 10^2\left(\frac{3}{36}\right) + 11^2\left(\frac{2}{36}\right) + 12^2\left(\frac{1}{36}\right) = \frac{1974}{36}$$
$$\sigma_X^2 = E[X^2] - \{E[X]\}^2 = \frac{1974}{36} - (7)^2 = \frac{210}{36}$$
Using Chebyshev's inequality,
$$P[|X - 7| \ge 3] \le \frac{\sigma_X^2}{(3)^2} = \frac{210}{(36)(9)} = \frac{35}{54}$$
3.65 A random variable has pdf f_X(x) = e^{-x} for x ≥ 0. Show that Chebyshev's inequality gives P[|X − 1| > 2] ≤ 1/4, and show that the actual probability is e^{-3}.

Solution  Given f_X(x) = e^{-x} for x ≥ 0,
$$E[X] = \int_0^{\infty} x\,e^{-x}\,dx = \left[-x\,e^{-x}\right]_0^{\infty} + \int_0^{\infty} e^{-x}\,dx = \left[-e^{-x}\right]_0^{\infty} = 1$$
$$E[X^2] = \int_0^{\infty} x^2 e^{-x}\,dx = \left[-x^2 e^{-x} - 2x\,e^{-x}\right]_0^{\infty} + 2\int_0^{\infty} e^{-x}\,dx = 2$$
so σ² = 2 − 1 = 1. By Chebyshev's inequality,
$$P[|X - 1| > 2] \le \frac{\sigma^2}{\varepsilon^2} = \frac{1}{4}$$
The actual probability is
$$P[|X - 1| > 2] = P[X > 3] = \int_3^{\infty} f_X(x)\,dx = \int_3^{\infty} e^{-x}\,dx = e^{-3}$$
Practice Problems
3.30(a) A random variable has the pdf f_X(x) = 3e^{-3x}, x ≥ 0. Obtain an upper bound for P(X ≥ 1) using the Markov inequality. (Ans: 1/3)
3.30(b) A random variable has the pdf f_X(x) = 3e^{-3x}, x ≥ 0. Obtain an upper bound for P[|X − E[X]| ≥ 1]. (Ans: 1/9)
3.31 A random variable X has a mean of 6 and a variance of 3. Use the Chebyshev inequality to obtain an upper bound on P[|X − 6| ≥ 2]. (Ans: 0.75)

REVIEW QUESTIONS
19. State Chebyshev's inequality. What is its significance?
20. State Markov's inequality.
A typical function of this form is shown in Fig. 3.7. Under these assumptions, the inverse function X = T^{-1}[Y] exists and is well behaved, since there is a one-to-one relation between each value of Y and its corresponding value of X. Consider a particular value y₀ corresponding to the particular value x₀ as shown in Fig. 3.7. The values x₀ and y₀ are related by
$$y_0 = T(x_0) \quad\text{or}\quad x_0 = T^{-1}(y_0) \qquad (3.103)$$
Now the probability that Y takes a value less than or equal to y₀ is given by
$$F_Y(y_0) = P[Y \le y_0] = P[X \le T^{-1}(y_0)] = F_X[T^{-1}(y_0)] \qquad (3.104)$$
This can also be written as
$$F_X(x_0) = F_Y[T(x_0)] \qquad (3.105)$$
Differentiating Eq. (3.104) with respect to y produces
$$f_Y(y_0) = f_X\!\left[T^{-1}(y_0)\right]\frac{d}{dy}T^{-1}(y_0)$$
and for any value of y we can write
$$f_Y(y) = f_X\!\left[T^{-1}(y)\right]\frac{d}{dy}T^{-1}(y) \qquad (3.106)$$
Similarly, by differentiating Eq. (3.105), we get
$$f_X(x) = f_Y[T(x)]\frac{dy}{dx} = f_Y(y)\frac{dy}{dx} \qquad (3.107)$$
or
$$f_Y(y) = \left.\frac{f_X(x)}{dy/dx}\right|_{x = T^{-1}(y)} \qquad (3.108)$$
or
$$f_Y(y) = f_X(x)\frac{dx}{dy} \qquad (3.109)$$
Now consider a monotonically decreasing function as shown in Fig. 3.8. In this case the event {Y ≤ y₀} is equivalent to the event {X ≥ T^{-1}(y₀)}, so F_Y(y₀) = 1 − F_X[T^{-1}(y₀)] and differentiating introduces a minus sign. Both cases are covered by
$$f_Y(y) = \left.\frac{f_X(x)}{|dy/dx|}\right|_{x = T^{-1}(y)} \qquad (3.111)$$
Nonmonotonic Transformation  Consider a nonmonotonic function as shown in Fig. 3.9, and a point y = y₀. Corresponding to it there exist three values of x; that is, several values of x map to the same point y. In this case we cannot associate the event {Y ≤ y₀} with events of the form {X ≤ T^{-1}(y₀)} or {X ≥ T^{-1}(y₀)}. To avoid this problem, we calculate the pdf of Y directly rather than first finding the CDF. For this, the function is redrawn as shown in Fig. 3.10, where the interval (y, y + dy) maps back onto intervals near x₁, x₂ and x₃.
Now consider an event of the form {y ≤ Y ≤ y + dy} for an infinitesimal dy. Its probability is P[y ≤ Y ≤ y + dy] = f_Y(y)dy. Corresponding to this event there exist three events involving the random variable X, around x₁, x₂ and x₃; the values x₁, x₂ and x₃ are obtained by solving the equation y = T(x). The probability P[y ≤ Y ≤ y + dy] is the sum of the probabilities of these three events:
$$P[y \le Y \le y + dy] = P(x_1 \le X \le x_1 + dx_1) + P(x_2 + dx_2 \le X \le x_2) + P(x_3 \le X \le x_3 + dx_3)$$
Since P[x₁ ≤ X ≤ x₁ + dx₁] = f_X(x₁)dx₁, and similarly for the other terms,
$$f_Y(y)\,dy = \frac{f_X(x_1)}{|T'(x_1)|}dy + \frac{f_X(x_2)}{|T'(x_2)|}dy + \frac{f_X(x_3)}{|T'(x_3)|}dy$$
$$f_Y(y) = \frac{f_X(x_1)}{|T'(x_1)|} + \frac{f_X(x_2)}{|T'(x_2)|} + \frac{f_X(x_3)}{|T'(x_3)|}$$
If there are n roots of the equation y = T(x), then
$$f_Y(y) = \sum_n \frac{f_X(x_n)}{|T'(x_n)|} \qquad (3.112)$$
where
$$T'(x_n) = \left.\frac{dT(x)}{dx}\right|_{x = x_n} \qquad (3.113)$$
Solved Problems

3.66 If Y = aX + b, where a ≠ 0 and b are constants, find the pdf of Y in terms of the pdf of X.

Solution  Given Y = aX + b. The values x and y of the random variables X and Y are related by
y = T(x) = ax + b
from which T′(x) = a and x = (y − b)/a. The pdf of Y is
$$f_Y(y) = \left.\frac{f_X(x_n)}{|T'(x_n)|}\right|_{x = \frac{y-b}{a}} = \frac{1}{|a|}\,f_X\!\left(\frac{y-b}{a}\right) \qquad (3.114)$$
3.67 If X is a uniformly distributed random variable between X₁ and X₂, find f_Y(y) for the above problem.

Solution
f_X(x) = 1/(X₂ − X₁) for X₁ ≤ x ≤ X₂, and 0 otherwise
$$f_Y(y) = \frac{1}{|a|}\,f_X\!\left(\frac{y-b}{a}\right) = \frac{1}{|a|(X_2 - X_1)}$$
For x = X₁, y₁ = aX₁ + b, and for x = X₂, y₂ = aX₂ + b. Hence Y is also uniformly distributed, over the interval (aX₁ + b, aX₂ + b).
3.68 If X is uniformly distributed in (−π/2, π/2), find the probability density function of Y = tan X.

Solution  Given X ~ U(−π/2, π/2), that is, a = −π/2 and b = π/2:
$$f_X(x) = \frac{1}{(\pi/2) - (-\pi/2)} = \frac{1}{\pi} \quad\text{for } -\frac{\pi}{2} \le x \le \frac{\pi}{2}; \quad 0 \text{ otherwise}$$
With T(x) = tan x, T′(x) = sec²x and x = tan^{-1}y, so
$$f_Y(y) = \frac{f_X(x_n)}{|T'(x_n)|} = \left.\frac{1/\pi}{\sec^2 x}\right|_{x = \tan^{-1} y} = \frac{1/\pi}{1 + y^2}$$
$$\Rightarrow\; f_Y(y) = \frac{1}{\pi(1 + y^2)}, \quad -\infty < y < \infty \qquad (3.115)$$
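The Cauchy density just obtained can be checked by simulation; since P(|Y| ≤ 1) = (2/π)tan^{-1}(1) = 1/2, a one-line Monte Carlo test suffices (a sketch assuming NumPy):

```python
import numpy as np

# Check of Solved Problem 3.68: tan(X) with X ~ U(-pi/2, pi/2) is Cauchy
rng = np.random.default_rng(5)
y = np.tan(rng.uniform(-np.pi / 2, np.pi / 2, 1_000_000))
print((np.abs(y) <= 1).mean())   # ~0.5 = integral of 1/(pi*(1+y**2)) over [-1, 1]
```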
3.69 If X is a continuous random variable with pdf f_X(x), find the distribution function and pdf of the random variable Y = aX², a > 0.

Solution  For y ≥ 0,
$$F_Y(y) = P[Y \le y] = P\left[-\sqrt{\frac{y}{a}} \le X \le \sqrt{\frac{y}{a}}\right] = F_X\!\left(\sqrt{\frac{y}{a}}\right) - F_X\!\left(-\sqrt{\frac{y}{a}}\right)$$
$$f_Y(y) = \frac{d}{dy}[F_Y(y)] = \frac{f_X\!\left(\sqrt{y/a}\right)}{2\sqrt{ay}} + \frac{f_X\!\left(-\sqrt{y/a}\right)}{2\sqrt{ay}} \qquad (3.116)$$
3.70 If the random variable X is uniformly distributed over (−1, 1), find the density function of Y = sin(πX/2).

Solution  Given Y = sin(πX/2). With y = T(x),
$$T'(x) = \frac{\pi}{2}\cos\!\left(\frac{\pi}{2}x\right), \qquad x = \frac{2}{\pi}\sin^{-1}(y)$$
Given f_X(x) = 1/2 for −1 ≤ x ≤ 1, and 0 otherwise,
$$f_Y(y) = \frac{1/2}{\frac{\pi}{2}\cos(\sin^{-1} y)}$$
Writing sin^{-1}y = θ, we have y = sin θ and cos(sin^{-1}y) = cos θ = √(1 − y²), so
$$f_Y(y) = \frac{1}{\pi\sqrt{1 - y^2}}, \quad -1 \le y \le 1 \qquad (3.117)$$
3.71 Let Y = X². Find the pdf of Y if X is a uniform random variable over (−1, 3).

Solution
f_X(x) = 1/4 for −1 ≤ x ≤ 3
With Y = T(x) = x², T′(x) = 2x and the roots of y = x² are x = ±√y. For the given range of X, the random variable Y satisfies 0 ≤ y ≤ 9, and the pdf of Y is $f_Y(y) = \sum_n f_X(x_n)/|T'(x_n)|$.
For 0 < y < 1 both roots ±√y lie in (−1, 3):
$$f_Y(y) = \frac{1/4}{|2x|}\bigg|_{x = -\sqrt{y}} + \frac{1/4}{|2x|}\bigg|_{x = \sqrt{y}} = \frac{1/4}{2\sqrt{y}} + \frac{1/4}{2\sqrt{y}} = \frac{1}{4\sqrt{y}}$$
For 1 < y < 9 only the root +√y lies in (−1, 3):
$$f_Y(y) = \frac{1/4}{2\sqrt{y}} = \frac{1}{8\sqrt{y}} \qquad (3.118)$$
3.72 If X is a uniformly distributed random variable over (0, 1), find the pdf of the random variable Y = Xⁿ for 0 < y < 1.

Solution
f_X(x) = 1 for 0 ≤ x ≤ 1, and 0 otherwise
With T(x) = xⁿ, T′(x) = nx^{n-1} and x = y^{1/n}, so
$$f_Y(y) = \left.\frac{1}{n x^{n-1}}\right|_{x = y^{1/n}} = \frac{1}{n\,y^{(n-1)/n}} \qquad (3.119)$$
Practice Problems
3.32(a) If X is a random variable with pdf f_X(x), find the pdf of (i) Y = |X|, (ii) Y = e^X, (iii) Y = a/X.
(Ans: (i) f_X(y) + f_X(−y), y ≥ 0; (ii) f_X(ln y)/y; (iii) (a/y²) f_X(a/y))
3.32(b) The pdf of a random variable X is given by f_X(x) = x²/100 for −5 < x < 5, and 0 otherwise. Find the pdf of Y = (12 − X)/2.
Solved Problems

3.73 Find the pdf of Y = −ln(1 − X), where X is a uniform random variable over (0, 1).

Solution  Given f_X(x) = 1 for 0 ≤ x ≤ 1 (0 otherwise) and Y = −ln(1 − X). The values x and y are related by
$$y = T(x) = -\ln(1 - x), \qquad T'(x) = -\frac{1}{1-x}(-1) = \frac{1}{1-x}$$
and ln(1 − x) = −y gives 1 − x = e^{-y}, i.e. x = 1 − e^{-y}. Hence
$$f_Y(y) = \left.\frac{f_X(x)}{|T'(x)|}\right|_{x = 1 - e^{-y}} = \left.(1 - x)\right|_{x = 1 - e^{-y}} = e^{-y}$$
$$f_Y(y) = e^{-y} \text{ for } y \ge 0; \quad 0 \text{ otherwise}$$
Practice Problems
3.33 If X is a random variable uniformly distributed over the interval (0, π/2), find the pdf of (a) Y = sin X, (b) Z = cos X.
(Ans: (a) 2/(π√(1 − y²)), 0 ≤ y ≤ 1)
3.34 X is a random variable with the exponential pdf f_X(x) = (1/3)e^{-x/3}u(x). A new random variable is defined by Y = 2X + 1. Find f_Y(y). (Ans: (1/6)e^{-(y-1)/6}, 1 ≤ y < ∞)
3.35 If X is a standard normal random variable, find the pdf of (i) Y = |X|, (ii) Y = X², (iii) Y = X³, (iv) Y = e^X.
(Ans: (ii) (1/√(2πy)) e^{-y/2} u(y); (iv) (1/(y√(2π))) e^{-(ln y)²/2})
3.36 If X is a random variable with pdf f_X(x) = 1/(π(1 + x²)), find the pdf of Y = 1/X. (Ans: 1/(π(1 + y²)))
3.37 If X ~ U(0, 1), find the density of Y = ln(1/X). (Ans: e^{-y} u(y))
The moment generating function (MGF) of a random variable X is the expected value of e^{uX}. For a continuous random variable,
$$M_X(u) = \int_{-\infty}^{\infty} e^{ux} f_X(x)\,dx \qquad (3.121)$$
and for a discrete random variable M_X(u) = Σ_x e^{ux} p_X(x). In either the discrete or the continuous case, M_X(u) is simply the expected value of e^{uX}.
M_X(u) is called the moment generating function because all the moments of X can be evaluated by successively differentiating M_X(u) and substituting u = 0. For the discrete case,
$$M_X'(u) = \sum_x x\,e^{ux} p_X(x) = E[X e^{uX}], \qquad M_X'(0) = E[X]$$
$$M_X''(u) = \sum_x x^2 e^{ux} p_X(x) = E[X^2 e^{uX}] \qquad (3.128)$$
$$M_X''(0) = E[X^2] \qquad (3.129)$$
In general, the nth derivative of M_X(u) is
$$M_X^{(n)}(u) = E[X^n e^{uX}] \qquad (3.130)$$
so that M_X^{(n)}(0) = E[Xⁿ]. The same result can be proved for a continuous random variable.
2. If M_X(u) is the MGF of a random variable X, then the MGF of the random variable Y = kX is M_X(ku).
Proof:
$$M_Y(u) = E[e^{uY}] = E[e^{ukX}] = E[e^{X(ku)}] = M_X(ku) \qquad (3.131)$$
3. If X₁ and X₂ are independent random variables, then $M_{X_1+X_2}(u) = M_{X_1}(u)\,M_{X_2}(u)$.
Proof:
$$M_{X_1+X_2}(u) = E[e^{u(X_1+X_2)}] = E[e^{uX_1}]\,E[e^{uX_2}] = M_{X_1}(u)\,M_{X_2}(u)$$
4. If Y = (X + a)/b, then M_Y(u) = e^{ua/b} M_X(u/b).
Proof:
$$M_Y(u) = E[e^{u(X+a)/b}] = e^{ua/b}\,E[e^{uX/b}] = e^{\frac{a}{b}u}\,M_X\!\left(\frac{u}{b}\right) \qquad (3.134)$$
5. If two random variables X and Y have MGFs such that M_X(u) = M_Y(u), then X and Y have identical distributions.
Finally, on existence: for $M_X(u) = \int e^{ux} f_X(x)\,dx$, the magnitude of an integral is less than or equal to the integral of the magnitude, so
$$|M_X(u)| = \left|\int_{-\infty}^{\infty} e^{ux} f_X(x)\,dx\right| \le \int_{-\infty}^{\infty} |e^{ux}|\,f_X(x)\,dx \qquad (3.135)$$
Since $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$, M_X(u) is finite provided E[e^{uX}] converges.
Solved Problems

3.74 Find the moment generating function of the random variable having probability density function
f_X(x) = x for 0 < x < 1; f_X(x) = 2 − x for 1 < x < 2; 0 elsewhere

Solution  The moment generating function is
$$M_X(u) = E[e^{uX}] = \int_{-\infty}^{\infty} f_X(x)\,e^{ux}\,dx = \int_0^1 x\,e^{ux}\,dx + \int_1^2 (2 - x)\,e^{ux}\,dx$$
$$= \left[\frac{x e^{ux}}{u} - \frac{e^{ux}}{u^2}\right]_0^1 + \left[\frac{2e^{ux}}{u}\right]_1^2 - \left[\frac{x e^{ux}}{u} - \frac{e^{ux}}{u^2}\right]_1^2$$
$$= \frac{e^u}{u} - \frac{e^u - 1}{u^2} + \frac{2e^{2u} - 2e^u}{u} - \left[\frac{2e^{2u} - e^u}{u} - \frac{e^{2u} - e^u}{u^2}\right]$$
$$= \frac{1 - 2e^u + e^{2u}}{u^2} = \left(\frac{e^u - 1}{u}\right)^2$$
3.75 Prove that the moment generating function of the random variable X having the pdf
f_X(x) = 1/3 for −1 < x < 2, and 0 elsewhere
is given by
$$M_X(u) = \begin{cases} \dfrac{e^{2u} - e^{-u}}{3u}, & u \ne 0 \\ 1, & u = 0 \end{cases}$$

Solution
$$M_X(u) = E[e^{uX}] = \int_{-\infty}^{\infty} e^{ux} f_X(x)\,dx = \frac{1}{3}\int_{-1}^{2} e^{ux}\,dx = \frac{1}{3u}\,e^{ux}\Big|_{-1}^{2} = \frac{e^{2u} - e^{-u}}{3u}$$
When u = 0, M_X(u) takes the 0/0 form. Applying L'Hospital's rule,
$$\left.\frac{2e^{2u} + e^{-u}}{3}\right|_{u=0} = \frac{2 + 1}{3} = 1$$
3.76 Find the MGF of the random variable X with pmf f_X(x) = P(1 − P)^x, x = 0, 1, 2, … (0 elsewhere), and hence find its mean and variance.

Solution
$$M_X(u) = E[e^{uX}] = \sum_x e^{ux} f_X(x) = \sum_{x=0}^{\infty} e^{ux} P(1 - P)^x = P\sum_{x=0}^{\infty} [(1 - P)e^u]^x = \frac{P}{1 - (1 - P)e^u}$$
(for (1 − P)e^u < 1). Then E[X] = M′_X(0):
$$M_X'(u) = \frac{P(1 - P)e^u}{\{1 - (1 - P)e^u\}^2} \;\Rightarrow\; E[X] = \frac{P(1 - P)}{P^2} = \frac{1 - P}{P}$$
$$M_X''(u) = P(1 - P)e^u\,\frac{1 + (1 - P)e^u}{\{1 - (1 - P)e^u\}^3} \;\Rightarrow\; E[X^2] = M_X''(0) = \frac{P(1 - P)(2 - P)}{P^3} = \frac{(1 - P)(2 - P)}{P^2}$$
$$\sigma_X^2 = \mathrm{Var}(X) = E[X^2] - \{E[X]\}^2 = \frac{(1 - P)(2 - P)}{P^2} - \left(\frac{1 - P}{P}\right)^2 = \frac{1 - P}{P^2}\left[(2 - P) - (1 - P)\right] = \frac{1 - P}{P^2}$$
3.77 If X has the MGF M_X(u), find the MGF of (i) Y = aX + b, (ii) Y = (X + a)/b.

Solution
(i) Given Y = aX + b,
$$M_Y(u) = E[e^{(aX+b)u}] = E[e^{Xau} e^{bu}] = e^{bu}\,E[e^{Xau}] = e^{bu}\,M_X(au)$$
(ii) Given Y = (X + a)/b,
$$M_Y(u) = E[e^{u(X+a)/b}] = e^{ua/b}\,E[e^{uX/b}] = e^{\frac{a}{b}u}\,M_X\!\left(\frac{u}{b}\right)$$
3.78 For the random variable X whose density function is f_X(x) = 1/(b − a) for a ≤ x ≤ b (0 otherwise), determine (i) the moment generating function, (ii) the mean, (iii) the variance.

Solution
$$M_X(u) = \int_{-\infty}^{\infty} e^{ux} f_X(x)\,dx = \int_a^b \frac{e^{ux}}{b-a}\,dx = \frac{1}{b-a}\left.\frac{e^{ux}}{u}\right|_a^b = \frac{e^{ub} - e^{ua}}{u(b-a)}$$
The mean is given by
$$m_X = E[X] = \int_a^b \frac{x}{b-a}\,dx = \frac{1}{b-a}\left.\frac{x^2}{2}\right|_a^b = \frac{b^2 - a^2}{2(b-a)} = \frac{b+a}{2}$$
$$E[X^2] = \int_a^b \frac{x^2}{b-a}\,dx = \frac{1}{b-a}\left.\frac{x^3}{3}\right|_a^b = \frac{b^3 - a^3}{3(b-a)} = \frac{b^2 + ab + a^2}{3}$$
$$\sigma_X^2 = E[X^2] - \{E[X]\}^2 = \frac{b^2 + ab + a^2}{3} - \frac{(b+a)^2}{4} = \frac{4b^2 + 4ab + 4a^2 - 3a^2 - 3b^2 - 6ab}{12} = \frac{b^2 - 2ab + a^2}{12} = \frac{(b-a)^2}{12}$$
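The moments can also be pulled out of the MGF symbolically; a sketch assuming SymPy is available (the limit at u = 0 handles the removable singularity):

```python
import sympy as sp

# Symbolic check of Solved Problem 3.78: moments from the uniform MGF
u, a, b = sp.symbols('u a b', positive=True)
M = (sp.exp(u * b) - sp.exp(u * a)) / (u * (b - a))
mean = sp.limit(sp.diff(M, u), u, 0)       # (a + b)/2
m2 = sp.limit(sp.diff(M, u, 2), u, 0)      # (a**2 + a*b + b**2)/3
print(sp.simplify(mean), sp.simplify(m2 - mean**2))  # (a+b)/2, (a-b)**2/12
```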
3.79 Find the MGF of the two-parameter exponential distribution whose density function is given by f_X(x) = λe^{-λ(x-a)}, x ≥ a, and hence find the mean and variance.

Solution  Given f_X(x) = λe^{-λ(x-a)}, x ≥ a,
$$M_X(u) = \int_{-\infty}^{\infty} f_X(x)\,e^{ux}\,dx = \int_a^{\infty} \lambda e^{-\lambda(x-a)} e^{ux}\,dx = \lambda e^{a\lambda}\int_a^{\infty} e^{-(\lambda-u)x}\,dx$$
$$= \lambda e^{a\lambda}\left.\frac{e^{-(\lambda-u)x}}{-(\lambda-u)}\right|_a^{\infty} = \frac{\lambda e^{a\lambda}}{\lambda-u}\,e^{-(\lambda-u)a} = \frac{\lambda}{\lambda-u}\,e^{au} \qquad (u < \lambda)$$
For the mean,
$$M_X'(u) = \lambda\,\frac{a(\lambda-u)e^{au} + e^{au}}{(\lambda-u)^2} \;\Rightarrow\; E[X] = M_X'(0) = \lambda\,\frac{a\lambda + 1}{\lambda^2} = a + \frac{1}{\lambda}$$
Differentiating once more and substituting u = 0,
$$E[X^2] = M_X''(0) = a^2 + \frac{2a}{\lambda} + \frac{2}{\lambda^2}$$
$$\mathrm{Var}[X] = E[X^2] - \{E[X]\}^2 = \left(a^2 + \frac{2a}{\lambda} + \frac{2}{\lambda^2}\right) - \left(a + \frac{1}{\lambda}\right)^2 = \frac{1}{\lambda^2}$$
3.80 Prove that the moment generating function of the sum of two independent random variables is the product of their generating functions.

Solution  Let X₁ and X₂ be two independent random variables whose sum is X, and let M_{X₁}(u) and M_{X₂}(u) be the moment generating functions of X₁ and X₂ respectively. Then
$$M_X(u) = E[e^{u(X_1+X_2)}] = E[e^{uX_1} e^{uX_2}] = E[e^{uX_1}]\,E[e^{uX_2}] = M_{X_1}(u)\,M_{X_2}(u)$$
where the factoring of the expectation uses the independence of X₁ and X₂.
3.81 Find the moment generating function of the random variable whose moments are m_r = (r + 1)! 2^r.

Solution  We know
$$M_X(u) = E[e^{uX}] = E\left[1 + uX + \frac{u^2 X^2}{2!} + \frac{u^3 X^3}{3!} + \cdots\right] = 1 + m_1 u + m_2\frac{u^2}{2!} + \cdots = \sum_{r=0}^{\infty} \frac{u^r}{r!}\,m_r$$
$$= \sum_{r=0}^{\infty} \frac{u^r}{r!}(r+1)!\,2^r = \sum_{r=0}^{\infty} (r+1)(2u)^r = \sum_{r=0}^{\infty} r(2u)^r + \sum_{r=0}^{\infty} (2u)^r$$
We know $\sum_{r=0}^{\infty} (2u)^r = \frac{1}{1-2u}$ (for |2u| < 1), and
$$\sum_{r=0}^{\infty} r(2u)^r = u\,\frac{d}{du}\left[\sum_{r=0}^{\infty} (2u)^r\right] = u\,\frac{d}{du}\left[\frac{1}{1-2u}\right] = \frac{2u}{(1-2u)^2}$$
$$M_X(u) = \frac{2u}{(1-2u)^2} + \frac{1}{1-2u} = \frac{1}{(1-2u)^2}$$
The characteristic function (CF) of a random variable X is defined as
$$\phi_X(\omega) = E[e^{j\omega X}] = \int_{-\infty}^{\infty} e^{j\omega x} f_X(x)\,dx \qquad (3.136)$$
From the above definition, the characteristic function is a Fourier transform of the pdf in which the sign of ω is positive instead of negative. The Fourier transform of a signal gives the frequency content of the signal, but from the CF we get no such information; there is no connection between the variable ω and physical frequency.
From Eq. (3.136) we can also obtain the pdf using
$$f_X(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \phi_X(\omega)\,e^{-j\omega x}\,d\omega$$
For a discrete random variable,
$$\phi_X(\omega) = \sum_i e^{j\omega x_i}\,P_X(x_i) \qquad (3.137)$$
3.16.1 Convergence of CF
Consider the equation
$$\phi_X(\omega) = \int_{-\infty}^{\infty} e^{j\omega x} f_X(x)\,dx$$
The magnitude of φ_X(ω) is
$$|\phi_X(\omega)| = \left|\int_{-\infty}^{\infty} e^{j\omega x} f_X(x)\,dx\right| \le \int_{-\infty}^{\infty} |e^{j\omega x}|\,f_X(x)\,dx$$
since the value of an integral is less than or equal to the integral of its absolute value. Because |e^{jωx}| = 1 and $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$,
$$\left|\int_{-\infty}^{\infty} e^{j\omega x} f_X(x)\,dx\right| \le 1 \qquad (3.138)$$
so the characteristic function always converges.
Expanding e^{jωx} in $\phi_X(\omega) = \int e^{j\omega x} f_X(x)\,dx$ as a power series, and using $\int f_X(x)\,dx = 1$ and $E[X^n] = \int x^n f_X(x)\,dx$,
$$\phi_X(\omega) = 1 + \frac{j\omega}{1!}E[X] + \frac{(j\omega)^2}{2!}E[X^2] + \cdots + \frac{(j\omega)^k}{k!}E[X^k] + \cdots$$
Differentiating,
$$\frac{d\phi_X(\omega)}{d\omega} = \frac{j}{1!}E[X] + \frac{j^2\,2\omega}{2!}E[X^2] + \frac{j^3(3\omega^2)}{3!}E[X^3] + \cdots + \frac{j^k\,k\,\omega^{k-1}}{k!}E[X^k] + \cdots \qquad (3.140)$$
Substituting ω = 0 in Eq. (3.140),
$$E[X] = \frac{1}{j}\left.\frac{d\phi_X(\omega)}{d\omega}\right|_{\omega=0} = \frac{1}{j}\,\phi_X'(0) \qquad (3.141)$$
Similarly, differentiating Eq. (3.140) again and substituting ω = 0,
$$\phi_X''(0) = j^2 E[X^2] \;\Rightarrow\; E[X^2] = \left(\frac{1}{j}\right)^2 \phi_X''(0) \qquad (3.142)$$
In general,
$$E[X^k] = \left(\frac{1}{j}\right)^k \phi_X^{(k)}(0) \qquad (3.143)$$
1. φ_X(0) = 1
Proof:
$$\phi_X(\omega)\big|_{\omega=0} = \int_{-\infty}^{\infty} e^{j0x} f_X(x)\,dx = \int_{-\infty}^{\infty} f_X(x)\,dx = 1$$
2. For real ω, |φ_X(ω)| ≤ 1
Proof:
$$|\phi_X(\omega)| = \left|\int_{-\infty}^{\infty} e^{j\omega x} f_X(x)\,dx\right| \le \int_{-\infty}^{\infty} |e^{j\omega x}|\,f_X(x)\,dx = \int_{-\infty}^{\infty} f_X(x)\,dx \le 1 \qquad (3.144)$$
using |e^{jωx}| = 1.
3. φ*_X(ω) = φ_X(−ω)
Proof:
$$\phi_X^*(\omega) = \left[\int_{-\infty}^{\infty} e^{j\omega x} f_X(x)\,dx\right]^* = \int_{-\infty}^{\infty} e^{-j\omega x} f_X(x)\,dx = \int_{-\infty}^{\infty} e^{(jx)(-\omega)} f_X(x)\,dx = \phi_X(-\omega) \qquad (3.145)$$
4. If f_X(−x) = f_X(x) then φ_X(ω) is real.
Proof:
$$\phi_X(\omega) = \int_{-\infty}^{\infty} f_X(x)\cos\omega x\,dx + j\int_{-\infty}^{\infty} f_X(x)\sin\omega x\,dx$$
When f_X is even, f_X(x) sin ωx is an odd function, so $\int_{-\infty}^{\infty} f_X(x)\sin\omega x\,dx = 0$ and φ_X(ω) is real.
5. φ_X(ω) cannot be purely imaginary.
6. If φ_X(ω) is the characteristic function of a random variable, then φ_X(cω) = φ_{cX}(ω).
Proof: φ_X(ω) = E[e^{jωX}], from which
$$\phi_X(c\omega) = E[e^{j\omega cX}] = \phi_{cX}(\omega) \qquad (3.146)$$
7. If φ_X(ω) is the characteristic function of a random variable, then the characteristic function of Y = aX + b is
$$\phi_Y(\omega) = e^{j\omega b}\,\phi_X(a\omega) \qquad (3.147)$$
Proof:
$$\phi_Y(\omega) = E[e^{j\omega(aX+b)}] = e^{j\omega b}\,E[e^{j\omega aX}] = e^{j\omega b}\,\phi_X(\omega a) \qquad (3.148)$$
8. If X₁ and X₂ are two independent random variables, then
$$\phi_{X_1+X_2}(\omega) = E[e^{j\omega(X_1+X_2)}] = E[e^{j\omega X_1}]\,E[e^{j\omega X_2}] = \phi_{X_1}(\omega)\,\phi_{X_2}(\omega) \qquad (3.149)$$
Solved Problems

3.82 Prove that the characteristic function and probability density function form a Fourier transform pair.

Solution  Consider the equations for a Fourier transform pair:
$$F(\omega) = \int_{-\infty}^{\infty} f(t)\,e^{-j\omega t}\,dt \qquad (3.150)$$
$$f(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)\,e^{j\omega t}\,d\omega \qquad (3.151)$$
The characteristic function is
$$\phi_X(\omega) = \int_{-\infty}^{\infty} e^{j\omega x} f_X(x)\,dx \qquad (3.152)$$
From Eq. (3.150) and Eq. (3.152), both are similar except for the sign of the exponent. Now
$$f_X(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \phi_X(\omega)\,e^{-j\omega x}\,d\omega \qquad (3.153)$$
which likewise matches Eq. (3.151) except for the sign. Therefore f_X(x) and φ_X(ω) form a Fourier transform pair.
3.83 Find the density function of a random variable X whose characteristic function is
$$\phi_X(\omega) = \frac{1}{2}\,e^{-|\omega|}, \quad -\infty \le \omega \le \infty$$

Solution
$$f_X(x) = \frac{1}{2\pi}\left[\int_{-\infty}^{0} \frac{1}{2}e^{\omega}\,e^{-j\omega x}\,d\omega + \int_{0}^{\infty} \frac{1}{2}e^{-\omega}\,e^{-j\omega x}\,d\omega\right]$$
$$= \frac{1}{4\pi}\left[\int_{-\infty}^{0} e^{(1-jx)\omega}\,d\omega + \int_{0}^{\infty} e^{-(1+jx)\omega}\,d\omega\right] = \frac{1}{4\pi}\left[\left.\frac{e^{(1-jx)\omega}}{1-jx}\right|_{-\infty}^{0} + \left.\frac{e^{-(1+jx)\omega}}{-(1+jx)}\right|_{0}^{\infty}\right]$$
$$= \frac{1}{4\pi}\left[\frac{1}{1-jx} + \frac{1}{1+jx}\right] = \frac{2}{4\pi(1+x^2)} = \frac{1}{2\pi(1+x^2)}$$
3.84 Find the moment generating function and characteristic function of a random variable X which has a uniform distribution.

Solution
$$M_X(u) = \int_{-\infty}^{\infty} e^{ux} f_X(x)\,dx = \frac{1}{b-a}\int_a^b e^{ux}\,dx = \frac{1}{b-a}\cdot\frac{1}{u}\,e^{ux}\Big|_a^b$$
$$M_X(u) = \frac{e^{ub} - e^{ua}}{u(b-a)}$$
Characteristic function:
$$\phi_X(\omega) = E[e^{j\omega X}] = \int_{-\infty}^{\infty} e^{j\omega x} f_X(x)\,dx = \frac{1}{b-a}\int_a^b e^{j\omega x}\,dx = \frac{1}{(b-a)}\,\frac{1}{j\omega}\,e^{j\omega x}\Big|_a^b = \frac{e^{j\omega b} - e^{j\omega a}}{j\omega(b-a)}$$
3.85 Find the density function of the random variable whose characteristic function is φ_X(ω) = 1 − |ω| for |ω| ≤ 1 and 0 otherwise.

Solution
$$f_X(x) = \frac{1}{2\pi}\left[\int_{-1}^{0} (1+\omega)\,e^{-j\omega x}\,d\omega + \int_{0}^{1} (1-\omega)\,e^{-j\omega x}\,d\omega\right]$$
$$= \frac{1}{2\pi}\left[\int_{-1}^{0} e^{-j\omega x}\,d\omega + \int_{-1}^{0} \omega\,e^{-j\omega x}\,d\omega + \int_{0}^{1} e^{-j\omega x}\,d\omega - \int_{0}^{1} \omega\,e^{-j\omega x}\,d\omega\right]$$
Evaluating the four integrals:
$$\int_{-1}^{0} e^{-j\omega x}\,d\omega = \frac{e^{jx} - 1}{jx}, \qquad \int_{0}^{1} e^{-j\omega x}\,d\omega = \frac{1 - e^{-jx}}{jx}$$
$$\int_{-1}^{0} \omega\,e^{-j\omega x}\,d\omega = -\frac{e^{jx}}{jx} + \frac{1}{x^2}\left(1 - e^{jx}\right), \qquad \int_{0}^{1} \omega\,e^{-j\omega x}\,d\omega = -\frac{e^{-jx}}{jx} + \frac{e^{-jx} - 1}{x^2}$$
Substituting all the integral values, the 1/(jx) terms cancel and
$$f_X(x) = \frac{1}{2\pi}\left[\frac{(1 - e^{jx}) + (1 - e^{-jx})}{x^2}\right] = \frac{1}{2\pi}\left[\frac{2 - 2\cos x}{x^2}\right] = \frac{1}{\pi x^2}(1 - \cos x)$$
$$\Rightarrow\; f_X(x) = \frac{1 - \cos x}{\pi x^2}$$
3.86 The characteristic function of a random variable X is given by φ_X(ω) = (1 − j2ω)^{-N/2}. Find the mean and second moment of X.

Solution  Mean:
$$E[X] = \frac{1}{j}\left.\frac{d\phi_X(\omega)}{d\omega}\right|_{\omega=0}$$
$$\frac{d\phi_X(\omega)}{d\omega} = -\frac{N}{2}(1 - j2\omega)^{-\frac{N}{2}-1}(-2j) = jN(1 - 2j\omega)^{-\frac{N}{2}-1}$$
$$E[X] = \frac{1}{j}\left[jN(1 - j2\omega)^{-\frac{N}{2}-1}\right]_{\omega=0} = N$$
Second moment:
$$E[X^2] = \left(\frac{1}{j}\right)^2 \left.\frac{d^2\phi_X(\omega)}{d\omega^2}\right|_{\omega=0} = -\frac{d}{d\omega}\left[jN(1 - 2j\omega)^{-\frac{N}{2}-1}\right]_{\omega=0}$$
$$= -jN\left[\left(-\frac{N}{2}-1\right)(1 - 2j\omega)^{-\frac{N}{2}-2}(-2j)\right]_{\omega=0} = -jN\left[-\frac{N}{2}-1\right][-2j] = 2N\left(\frac{N}{2}+1\right)$$
3.87 Find the density function of the random variable whose characteristic function is e^{-σ²ω²/2}.

Solution
$$f_X(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} \phi_X(\omega)\,e^{-j\omega x}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\sigma^2\omega^2/2}\,e^{-j\omega x}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\frac{1}{2}(\sigma^2\omega^2 + 2j\omega x)}\,d\omega$$
Completing the square in the exponent,
$$f_X(x) = \frac{1}{2\pi}\,e^{-x^2/2\sigma^2}\int_{-\infty}^{\infty} e^{-\frac{1}{2}\left(\sigma\omega + \frac{jx}{\sigma}\right)^2}\,d\omega$$
Let σω + jx/σ = t, so dω = dt/σ:
$$f_X(x) = \frac{1}{2\pi\sigma}\,e^{-x^2/2\sigma^2}\int_{-\infty}^{\infty} e^{-t^2/2}\,dt = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-x^2/2\sigma^2}\underbrace{\left[\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-t^2/2}\,dt\right]}_{=\,1}$$
The bracketed term is the area enclosed by the standard normal pdf, equal to one. Hence
$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-x^2/2\sigma^2}$$
Binomial Distribution
MGF
$$M_X(u) = E[e^{uX}] = \sum_{x=0}^{n} e^{ux}\binom{n}{x} p^x(1-p)^{n-x} = \sum_{x=0}^{n} \binom{n}{x}(p\,e^u)^x(1-p)^{n-x}$$
$$M_X(u) = [p\,e^u + (1-p)]^n \qquad (3.154)$$
CF
$$\phi_X(\omega) = E[e^{j\omega X}] = \sum_{x=0}^{n} e^{j\omega x}\binom{n}{x} p^x(1-p)^{n-x} = \sum_{x=0}^{n} \binom{n}{x}(p\,e^{j\omega})^x(1-p)^{n-x} = [p\,e^{j\omega} + (1-p)]^n \qquad (3.155)$$
Poisson Distribution
MGF
$$M_X(u) = E[e^{uX}] = \sum_{x=0}^{\infty} e^{ux}\,\frac{e^{-\lambda}\lambda^x}{x!} = \sum_{x=0}^{\infty} \frac{e^{-\lambda}(\lambda e^u)^x}{x!} = e^{-\lambda}\left[1 + \frac{\lambda e^u}{1!} + \frac{(\lambda e^u)^2}{2!} + \cdots\right]$$
$$= e^{-\lambda}\,e^{\lambda e^u} = e^{\lambda(e^u - 1)} \qquad (3.156)$$
CF
$$\phi_X(\omega) = E[e^{j\omega X}] = \sum_{x=0}^{\infty} e^{j\omega x}\,\frac{e^{-\lambda}\lambda^x}{x!} = \sum_{x=0}^{\infty} \frac{e^{-\lambda}(\lambda e^{j\omega})^x}{x!} = e^{-\lambda}\left[1 + \frac{\lambda e^{j\omega}}{1!} + \frac{(\lambda e^{j\omega})^2}{2!} + \cdots\right]$$
$$= e^{-\lambda}\,e^{\lambda e^{j\omega}} = e^{\lambda(e^{j\omega} - 1)} \qquad (3.157)$$
Uniform Distribution
MGF
$$M_X(u) = \frac{1}{b-a}\int_a^b e^{ux}\,dx = \frac{e^{ub} - e^{ua}}{u(b-a)} \qquad (3.158)$$
CF
$$\phi_X(\omega) = E[e^{j\omega X}] = \int_{-\infty}^{\infty} e^{j\omega x} f_X(x)\,dx = \int_a^b \frac{e^{j\omega x}}{b-a}\,dx = \frac{1}{b-a}\left.\frac{e^{j\omega x}}{j\omega}\right|_a^b = \frac{e^{j\omega b} - e^{j\omega a}}{j\omega(b-a)} \qquad (3.159)$$
Gaussian Distribution
$$f_X(x) = \frac{1}{\sqrt{2\pi\sigma_X^2}}\,e^{-(x-m_X)^2/2\sigma_X^2}, \quad -\infty < x < \infty$$
MGF
$$M_X(u) = E[e^{uX}] = \int_{-\infty}^{\infty} e^{ux}\,\frac{1}{\sqrt{2\pi\sigma_X^2}}\,e^{-(x-m_X)^2/2\sigma_X^2}\,dx$$
Let (x − m_X)/σ_X = t, so x = σ_X t + m_X and dx = σ_X dt:
$$M_X(u) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{u(\sigma_X t + m_X)}\,e^{-t^2/2}\,dt = e^{\left(m_X u + \frac{\sigma_X^2 u^2}{2}\right)}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-(t - u\sigma_X)^2/2}\,dt$$
With t − uσ_X = p (dt = dp), the remaining integral
$$\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-p^2/2}\,dp \qquad (3.160)$$
represents the area enclosed by a random variable N(0, 1), which equals 1. Hence
$$M_X(u) = e^{m_X u + \sigma_X^2 u^2/2}$$
CF
$$\phi_X(\omega) = E[e^{j\omega X}] = \int_{-\infty}^{\infty} e^{j\omega x}\,\frac{1}{\sqrt{2\pi\sigma_X^2}}\,e^{-(x-m_X)^2/2\sigma_X^2}\,dx$$
With the same substitution (x − m_X)/σ_X = t,
$$\phi_X(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{j\omega(\sigma_X t + m_X)}\,e^{-t^2/2}\,dt = e^{\left(j\omega m_X + \frac{\sigma_X^2(j\omega)^2}{2}\right)}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-(t - j\omega\sigma_X)^2/2}\,dt$$
With t − jωσ_X = p the remaining integral equals 1, so
$$\phi_X(\omega) = e^{j\omega m_X + \frac{\sigma_X^2(j\omega)^2}{2}} = e^{j\omega m_X - \sigma_X^2\omega^2/2} \qquad (3.161)$$
Exponential Distribution
MGF
$$M_X(u) = \int_0^{\infty} e^{ux}\,\lambda e^{-\lambda x}\,dx = \lambda\int_0^{\infty} e^{-(\lambda-u)x}\,dx = \lambda\left.\frac{e^{-(\lambda-u)x}}{-(\lambda-u)}\right|_0^{\infty} = \frac{\lambda}{\lambda-u} \qquad (u < \lambda) \qquad (3.162)$$
CF
$$\phi_X(\omega) = \int_{-\infty}^{\infty} e^{j\omega x} f_X(x)\,dx = \int_0^{\infty} e^{j\omega x}\,\lambda e^{-\lambda x}\,dx = \lambda\int_0^{\infty} e^{-(\lambda-j\omega)x}\,dx = \frac{\lambda}{\lambda - j\omega} \qquad (3.163)$$
Rayleigh Distribution
MGF
$$M_X(u) = \int_{-\infty}^{\infty} e^{ux} f_X(x)\,dx = \int_0^{\infty} e^{ux}\,\frac{x}{\sigma^2}\,e^{-x^2/2\sigma^2}\,dx = \frac{1}{\sigma^2}\,e^{\sigma^2 u^2/2}\int_0^{\infty} x\,e^{-\frac{1}{2}\left(\frac{x}{\sigma} - u\sigma\right)^2}\,dx$$
Let x/σ − uσ = t, so x = σ(t + uσ) and dx = σ dt:
$$M_X(u) = \frac{1}{\sigma^2}\,e^{u^2\sigma^2/2}\left\{\int_0^{\infty} \sigma(t + u\sigma)\,e^{-t^2/2}\,\sigma\,dt\right\} = e^{u^2\sigma^2/2}\left\{\int_0^{\infty} t\,e^{-t^2/2}\,dt + u\sigma\int_0^{\infty} e^{-t^2/2}\,dt\right\}$$
$$M_X(u) = e^{u^2\sigma^2/2}\left\{1 + u\sigma\sqrt{\frac{\pi}{2}}\right\} \qquad (3.164)$$
CF
$$\phi_X(\omega) = M_X(u)\big|_{u=j\omega} = e^{-\sigma^2\omega^2/2}\left\{1 + j\omega\sigma\sqrt{\frac{\pi}{2}}\right\} \qquad (3.165)$$
Gamma Distribution
MGF
$$M_X(u) = \int_{-\infty}^{\infty} e^{ux} f_X(x)\,dx = \int_0^{\infty} e^{ux}\,\frac{x^{\alpha-1} e^{-x/\beta}}{\beta^{\alpha}\Gamma(\alpha)}\,dx = \frac{1}{\beta^{\alpha}\Gamma(\alpha)}\int_0^{\infty} x^{\alpha-1}\,e^{-\left(\frac{1-u\beta}{\beta}\right)x}\,dx \qquad (3.166)$$
Let $\left(\frac{1-u\beta}{\beta}\right)x = t$, so $dx = \frac{\beta\,dt}{1-u\beta}$:
$$M_X(u) = \frac{1}{\beta^{\alpha}\Gamma(\alpha)}\int_0^{\infty} \left(\frac{\beta t}{1-u\beta}\right)^{\alpha-1} e^{-t}\left(\frac{\beta\,dt}{1-u\beta}\right) = \frac{1}{\Gamma(\alpha)(1-u\beta)^{\alpha}}\int_0^{\infty} t^{\alpha-1} e^{-t}\,dt$$
$$= \frac{\Gamma(\alpha)}{\Gamma(\alpha)(1-u\beta)^{\alpha}} = (1 - u\beta)^{-\alpha} \qquad (3.167)$$
CF
$$\phi_X(\omega) = M_X(u)\big|_{u=j\omega} = (1 - j\omega\beta)^{-\alpha} \qquad (3.168)$$
Chi-Square Distribution
MGF
$$M_X(u) = \int_0^{\infty} e^{ux}\,\frac{x^{\frac{n}{2}-1} e^{-x/2}}{2^{n/2}\,\Gamma(n/2)}\,dx = \frac{1}{2^{n/2}\,\Gamma(n/2)}\int_0^{\infty} x^{\frac{n}{2}-1}\,e^{-\left(\frac{1}{2}-u\right)x}\,dx$$
Let $\left(\frac{1-2u}{2}\right)x = t$, so $dx = \frac{2\,dt}{1-2u}$:
$$M_X(u) = \frac{1}{2^{n/2}\,\Gamma(n/2)}\int_0^{\infty} \frac{2^{\frac{n}{2}-1}\,t^{\frac{n}{2}-1}}{(1-2u)^{\frac{n}{2}-1}}\,e^{-t}\,\frac{2\,dt}{1-2u} = \frac{1}{\Gamma(n/2)(1-2u)^{n/2}}\int_0^{\infty} t^{\frac{n}{2}-1} e^{-t}\,dt$$
$$= \frac{\Gamma(n/2)}{\Gamma(n/2)(1-2u)^{n/2}} = \left(\frac{1}{1-2u}\right)^{n/2} \qquad (3.170)$$
CF
$$\phi_X(\omega) = M_X(u)\big|_{u=j\omega} = \left(\frac{1}{1-j2\omega}\right)^{n/2} \qquad (3.171)$$
REVIEW QUESTION
21. Show that the characteristic function of a Poisson random variable is given by φ_X(ω) = exp(−b(1 − e^{jω})).
The above equation is known as Chernoff's inequality. The Chernoff bound is obtained by minimizing the right-hand side of Eq. (3.173). Therefore,

$$P[X \ge a] \le \min_{u \ge 0}\; e^{-au} M_X(u) \tag{3.174}$$
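To make Eq. (3.174) concrete, here is a small illustrative computation (not from the text): for an exponential random variable with rate λ, M_X(u) = λ/(λ − u), and the bound e^{−au}M_X(u) can be minimized over a grid of u values and compared with the exact tail probability e^{−λa}. The values λ = 1, a = 3 are arbitrary.

```python
import numpy as np

# Chernoff bound for an exponential(lam) tail:
# P[X >= a] <= min_{0 <= u < lam} e^{-a u} * lam / (lam - u).
lam, a = 1.0, 3.0                           # arbitrary illustrative values
u = np.linspace(0.0, lam * 0.999, 10_000)
bound = np.exp(-a * u) * lam / (lam - u)

print("Chernoff bound:", bound.min())       # optimum at u = lam - 1/a here
print("exact tail    :", np.exp(-lam * a))  # P[X >= a] = e^{-lam a}
```

The bound (≈ 0.406) correctly dominates the exact tail (≈ 0.050), illustrating that Chernoff bounds are conservative but exponentially tight.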
3.17.11 Cumulants
Consider a random variable X with MGF M_X(u). Taking the logarithm of M_X(u) gives the cumulant generating function ψ_X(u):

$$\psi_X(u) = \log[M_X(u)]$$

$$= \log\{E[e^{Xu}]\} = \log E\!\left[1 + Xu + \frac{X^2u^2}{2!} + \frac{X^3u^3}{3!} + \cdots\right] \tag{3.175}$$

$$= \log\left[1 + \frac{u}{1!}E[X] + \frac{u^2}{2!}E[X^2] + \frac{u^3}{3!}E[X^3] + \cdots\right] \tag{3.176}$$

We know

$$\log(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots$$

$$\Rightarrow\ \psi_X(u) = \left\{\frac{u}{1!}E[X] + \frac{u^2}{2!}E[X^2] + \frac{u^3}{3!}E[X^3] + \cdots\right\} - \frac12\left\{\frac{u}{1!}E[X] + \frac{u^2}{2!}E[X^2] + \cdots\right\}^2 + \frac13\left\{\frac{u}{1!}E[X] + \cdots\right\}^3 - \cdots$$

$$= \frac{u}{1!}E[X] + \frac{u^2}{2!}\left\{E[X^2] - (E[X])^2\right\} + \frac{u^3}{3!}\left\{E[X^3] - 3E[X]E[X^2] + 2(E[X])^3\right\} + \cdots \tag{3.177}$$

$$= \psi_1 u + \psi_2\frac{u^2}{2!} + \psi_3\frac{u^3}{3!} + \cdots$$

If we differentiate ψ_X(u) with respect to u 'r' times and then substitute u = 0, we get the rth cumulant. That is,

$$\psi_r = \left.\frac{d^r}{du^r}\psi_X(u)\right|_{u=0}$$

If r = 1,

$$\psi_1 = \left.\frac{d\psi_X(u)}{du}\right|_{u=0}, \qquad \frac{d}{du}\psi_X(u) = \frac{d}{du}\left[\log M_X(u)\right] = \frac{1}{M_X(u)}\,M_X'(u)$$

Since M_X(0) = 1 and M_X'(0) = m₁, the first cumulant is ψ₁ = m₁ = E[X], the mean. Similarly, for r = 2,

$$\psi_2 = E[X^2] - \{E[X]\}^2 = \text{variance of } X$$
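A brief numerical illustration (not part of the text): for a Poisson random variable with parameter λ, M_X(u) = e^{λ(e^u − 1)}, so ψ_X(u) = λ(e^u − 1) and every cumulant equals λ. The sketch below recovers ψ₁ and ψ₂ by central finite differences of log M_X(u) at u = 0; λ = 4 is an arbitrary choice.

```python
import math

# Cumulants of Poisson(lam) via finite differences of psi(u) = log M_X(u) = lam*(e^u - 1).
lam = 4.0        # arbitrary illustrative value
h = 1e-4         # step size for central differences

def psi(u):
    return lam * (math.exp(u) - 1.0)   # log of the Poisson MGF

psi1 = (psi(h) - psi(-h)) / (2 * h)              # first cumulant  = mean
psi2 = (psi(h) - 2 * psi(0.0) + psi(-h)) / h**2  # second cumulant = variance

print(psi1, psi2)   # both approximately lam = 4.0
```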
Solved Problems
3.88 A random variable X has probability mass function $p_X(x) = \left(\frac12\right)^x$; x = 1, 2, 3, …
Find the moment generating function, mean and variance.

Solution Given: $p_X(x) = \left(\frac12\right)^x$; x = 1, 2, 3, …

$$M_X(u) = \sum_i e^{ux_i} p_X(x_i) = e^u\left(\frac12\right) + e^{2u}\left(\frac12\right)^2 + e^{3u}\left(\frac12\right)^3 + \cdots = \sum_{n=1}^{\infty}\left(\frac{e^u}{2}\right)^n = \sum_{n=0}^{\infty}\left(\frac{e^u}{2}\right)^n - 1$$

$$= \frac{1}{1 - e^u/2} - 1 = \frac{e^u/2}{1 - e^u/2} \qquad \left(\because\ \sum_{n=0}^{\infty} r^n = \frac{1}{1 - r},\ |r| < 1\right)$$
$$E[X] = \sum_i x_i\, p_X(x_i) = 1\left(\frac12\right) + 2\left(\frac12\right)^2 + 3\left(\frac12\right)^3 + \cdots = \sum_{k=1}^{\infty} k\left(\frac12\right)^k$$

Consider

$$\sum_{k=1}^{\infty} k a^k = a\sum_{k=1}^{\infty} k a^{k-1} = a\sum_{k=1}^{\infty}\frac{d}{da}(a^k) = a\frac{d}{da}\left[\sum_{k=0}^{\infty} a^k\right] = a\frac{d}{da}\left[\frac{1}{1 - a}\right] = \frac{a}{(1 - a)^2}$$

Here a = 1/2, so

$$E[X] = \sum_{k=1}^{\infty} k\left(\frac12\right)^k = \frac{1/2}{\left(1 - \frac12\right)^2} = 2$$

$$E[X^2] = \sum_i x_i^2\, p_X(x_i) = (1)^2\frac12 + (2)^2\left(\frac12\right)^2 + (3)^2\left(\frac12\right)^3 + \cdots = \sum_{k=1}^{\infty} k^2\left(\frac12\right)^k$$

Consider

$$\sum_{k=1}^{\infty} k^2 a^k = a\sum_{k=1}^{\infty} k\cdot k\, a^{k-1} = a\frac{d}{da}\left[\sum_{k=1}^{\infty} k a^k\right] = a\frac{d}{da}\left[\frac{a}{(1 - a)^2}\right] = a\left[\frac{(1 - a)^2 + 2a(1 - a)}{(1 - a)^4}\right] = a\left[\frac{1 - a + 2a}{(1 - a)^3}\right] = \frac{a(1 + a)}{(1 - a)^3}$$

With a = 1/2,

$$E[X^2] = \frac{\frac12\left(1 + \frac12\right)}{\left(1 - \frac12\right)^3} = \frac{3/4}{1/8} = 6$$

Hence the variance is $\sigma_X^2 = E[X^2] - \{E[X]\}^2 = 6 - 4 = 2$.
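An optional simulation check (not from the text): the pmf p_X(x) = (1/2)^x, x ≥ 1, is the geometric distribution with p = 1/2, so draws can be generated directly and their sample mean and variance compared with E[X] = 2 and Var(X) = 2.

```python
import numpy as np

# Check E[X] = 2 and Var(X) = 2 for p_X(x) = (1/2)^x, x = 1, 2, 3, ...
# This is a geometric distribution with success probability p = 1/2.
rng = np.random.default_rng(2)
x = rng.geometric(p=0.5, size=1_000_000)

print(x.mean(), x.var())   # both close to 2
```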
For a binomial random variable X, the mean is

$$E[X] = \sum_{x=0}^{n} x\binom nx p^x (1 - p)^{n-x} = \sum_{x=1}^{n} x\binom nx p^x (1 - p)^{n-x} = \sum_{x=1}^{n} x\,\frac{n!}{x!(n - x)!}\, p^x (1 - p)^{n-x}$$

$$= np\sum_{x=1}^{n}\frac{(n - 1)!}{(x - 1)!(n - x)!}\, p^{x-1}(1 - p)^{(n-1)-(x-1)} = np\sum_{x=1}^{n}\binom{n-1}{x-1} p^{x-1}(1 - p)^{(n-1)-(x-1)}$$

$$= np\,[p + (1 - p)]^{n-1} = np$$

The characteristic function is

$$\phi_X(\omega) = E[e^{j\omega X}] = \sum_i e^{j\omega x_i}\, p[x_i] = \sum_{x=0}^{n} e^{j\omega x}\binom nx p^x (1 - p)^{n-x} = \sum_{x=0}^{n}\binom nx(p e^{j\omega})^x (1 - p)^{n-x} = \left[p e^{j\omega} + (1 - p)\right]^n$$
Solution Given the characteristic function $\phi_X(\omega) = \left(\dfrac{a}{a - j\omega}\right)^N$:

$$E[X] = m_1 = -j\left.\frac{d\phi_X(\omega)}{d\omega}\right|_{\omega = 0} = -j\left[N\left(\frac{a}{a - j\omega}\right)^{N-1}\frac{-a(-j)}{(a - j\omega)^2}\right]_{\omega = 0} = -j\left[N\,\frac{aj}{a^2}\right] = \frac Na$$

$$E[X^2] = m_2 = -\left.\frac{d^2\phi_X(\omega)}{d\omega^2}\right|_{\omega = 0} = -\frac{d}{d\omega}\left[\frac{jN}{a}\left(\frac{a}{a - j\omega}\right)^{N+1}\right]_{\omega = 0} = -\frac{jN}{a}\left\{(N + 1)\left(\frac{a}{a - j\omega}\right)^N\frac{-a(-j)}{(a - j\omega)^2}\right\}_{\omega = 0}$$

$$= -\frac{jN}{a}\left\{\frac{(N + 1)(aj)}{a^2}\right\} = \frac{N(N + 1)}{a^2}$$

$$\sigma_X^2 = E[X^2] - \{E[X]\}^2 = \frac{N(N + 1)}{a^2} - \frac{N^2}{a^2} = \frac{N}{a^2}$$
3.91 Let the random variable X have the pdf

$$f_X(x) = \begin{cases}\frac12\, e^{-x/2}, & x > 0\\ 0, & \text{otherwise}\end{cases}$$

Find the moment-generating function, mean and variance of X.

Solution The MGF of X is given by

$$M_X(u) = E[e^{uX}] = \int_{-\infty}^{\infty} e^{ux} f_X(x)\,dx = \int_0^{\infty} e^{ux}\left(\frac12 e^{-x/2}\right)dx = \frac12\int_0^{\infty} e^{-\left(\frac12 - u\right)x}\,dx$$

$$= \frac12\left\{\left.\frac{e^{-\left(\frac12 - u\right)x}}{-\left(\frac12 - u\right)}\right|_0^{\infty}\right\} = \frac12\cdot\frac{1}{\frac12 - u} = \frac{1}{1 - 2u}$$

$$\text{Mean} = M_X'(0) = \left.\frac{d}{du}\left[\frac{1}{1 - 2u}\right]\right|_{u = 0} = \left.\frac{2}{(1 - 2u)^2}\right|_{u = 0} = 2$$

$$E[X^2] = M_X''(0) = \left.\frac{d}{du}\left(\frac{2}{(1 - 2u)^2}\right)\right|_{u = 0} = \left.\frac{8}{(1 - 2u)^3}\right|_{u = 0} = 8$$

$$\sigma_X^2 = E[X^2] - \{E[X]\}^2 = 8 - (2)^2 = 4$$
Solution Given: $M_X(u) = \left(\frac14 + \frac34 e^u\right)^5$

$$E[X] = M_X'(u)\big|_{u = 0} = \left\{\frac{d}{du}\left(\frac14 + \frac34 e^u\right)^5\right\}_{u = 0} = 5\left(\frac14 + \frac34 e^u\right)^4\left(\frac34 e^u\right)\bigg|_{u = 0} = 5\left(\frac34\right) = \frac{15}{4}$$

$$E[X^2] = M_X''(u)\big|_{u = 0} = \frac{d}{du}\left\{\frac{15}{4}\, e^u\left(\frac14 + \frac34 e^u\right)^4\right\}_{u = 0} = \frac{15}{4}\left\{\left(\frac14 + \frac34 e^u\right)^4 e^u + 4e^u\left(\frac14 + \frac34 e^u\right)^3\left(\frac34 e^u\right)\right\}_{u = 0}$$

$$= \frac{15}{4}\left\{\left(\frac14 + \frac34\right)^4 + 4\left(\frac14 + \frac34\right)^3\frac34\right\} = \frac{15}{4}\{1 + 3\} = 15$$

$$\text{Var}(X) = E[X^2] - \{E[X]\}^2 = 15 - \left(\frac{15}{4}\right)^2 = \frac{15}{16}$$

The given MGF is that of a binomial random variable X, for which

$$P[X = x] = \binom nx p^x q^{n-x}$$

For a binomial random variable, mean = np = 15/4 with n = 5, so p = 3/4 and q = 1/4.

$$P[X = 2] = \binom52\left(\frac34\right)^2\left(\frac14\right)^3 = 10\left(\frac9{16}\right)\left(\frac1{64}\right) = \frac{45}{512}$$
3.93 Show that for the uniform distribution $f_X(x) = \frac{1}{2a},\ -a < x < a$, the moment-generating function about the origin is $\dfrac{\sinh(au)}{au}$.

Solution Given:

$$f_X(x) = \begin{cases}\frac{1}{2a}, & -a < x < a\\ 0, & \text{otherwise}\end{cases}$$

$$M_X(u) = E[e^{uX}] = \int_{-\infty}^{\infty} e^{ux} f_X(x)\,dx = \int_{-a}^{a} e^{ux}\left(\frac{1}{2a}\right)dx = \frac{1}{2a}\int_{-a}^{a} e^{ux}\,dx$$

$$= \frac{1}{2au}\, e^{ux}\Big|_{-a}^{a} = \frac{1}{2au}\left(e^{au} - e^{-au}\right) = \frac{\sinh(au)}{au}$$
3.94 The MGF of a uniform distribution for a random variable X is $\frac1u\left(e^{5u} - e^{4u}\right)$. Find E[X].

Solution Given:

$$M_X(u) = \frac{e^{5u} - e^{4u}}{u}$$

For a uniform distribution over (a, b), the MGF is given by

$$M_X(u) = \frac{1}{b - a}\left[\frac{e^{bu} - e^{au}}{u}\right]$$

Comparing the above equation with the given MGF, we find b = 5 and a = 4. Therefore,

$$E[X] = \frac{b + a}{2} = \frac92$$
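As a side check (not in the text), E[X] can also be read off numerically as M_X'(0) using a central difference; the sketch below applies this to the MGF of Solved Problem 3.94 and returns approximately 4.5.

```python
import math

# E[X] = M'(0) for M(u) = (e^{5u} - e^{4u})/u (uniform over (4, 5)).
def M(u):
    if abs(u) < 1e-12:          # removable singularity at u = 0: M(0) = 1
        return 1.0
    return (math.exp(5 * u) - math.exp(4 * u)) / u

h = 1e-5
print((M(h) - M(-h)) / (2 * h))   # ~4.5 = E[X]
```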
A random variable X has the pdf

$$f_X(x) = \begin{cases}\dfrac{2^{n+1} x^n e^{-2x}}{n!}, & x \ge 0\\[2pt] 0, & \text{otherwise}\end{cases}$$

Find the characteristic function, mean and variance.

Solution

$$\phi_X(\omega) = \int_0^{\infty} e^{j\omega x}\,\frac{2^{n+1} x^n e^{-2x}}{n!}\,dx = \frac{2^{n+1}}{n!}\int_0^{\infty} e^{-(2 - j\omega)x}\, x^n\,dx$$

Let (2 − jω) = a. Then

$$\phi_X(\omega) = \frac{2^{n+1}}{n!}\int_0^{\infty} e^{-ax}\, x^n\,dx = \frac{2^{n+1}}{n!}\cdot\frac{n!}{a^{n+1}} = \left(\frac2a\right)^{n+1} = \left(\frac{2}{2 - j\omega}\right)^{n+1} = \left(1 - \frac{j\omega}{2}\right)^{-(n+1)}$$

The mean is

$$E[X] = \frac1j\,\phi'(\omega)\big|_{\omega = 0}, \qquad \phi'(\omega) = \frac{d}{d\omega}\left\{\left(1 - \frac{j\omega}{2}\right)^{-(n+1)}\right\} = -(n + 1)\left(1 - \frac{j\omega}{2}\right)^{-(n+2)}\left(-\frac j2\right)$$

$$\phi'(\omega)\big|_{\omega = 0} = \frac{j(n + 1)}{2}\ \Rightarrow\ E[X] = \frac1j\cdot\frac{j(n + 1)}{2} = \frac{n + 1}{2}$$

Variance:

$$E[X^2] = \left(\frac1j\right)^2\phi''(\omega)\big|_{\omega = 0}, \qquad \phi''(\omega) = -(n + 1)[-(n + 2)]\left(1 - \frac{j\omega}{2}\right)^{-(n+3)}\left(-\frac j2\right)^2$$

$$\phi''(0) = \frac{-(n + 1)(n + 2)}{2^2}\ \Rightarrow\ E[X^2] = -\phi''(0) = \frac{(n + 1)(n + 2)}{2^2}$$

$$\sigma_X^2 = E[X^2] - \{E[X]\}^2 = \frac{(n + 1)(n + 2)}{2^2} - \frac{(n + 1)^2}{2^2} = \frac{n + 1}{4}\,[(n + 2) - (n + 1)] = \frac{n + 1}{4}$$
For a random variable with characteristic function $\phi_X(\omega) = \dfrac{1}{1 - j\omega a}$ (an exponential with mean a):

$$\phi_X'(\omega) = \frac{-1}{(1 - j\omega a)^2}(-ja) = \frac{ja}{(1 - j\omega a)^2}$$

$$E[X] = \frac1j\left.\frac{ja}{(1 - j\omega a)^2}\right|_{\omega = 0} = a$$

$$\phi_X''(\omega) = \frac{d}{d\omega}\left[\frac{ja}{(1 - j\omega a)^2}\right] = \frac{-2(ja)(-ja)}{(1 - j\omega a)^3} = \frac{-2a^2}{(1 - j\omega a)^3}$$

$$\phi_X''(\omega)\big|_{\omega = 0} = -2a^2, \qquad E[X^2] = -\phi_X''(\omega)\big|_{\omega = 0} = 2a^2$$

$$\sigma_X^2 = \text{Var}(X) = E[X^2] - \{E[X]\}^2 = 2a^2 - a^2 = a^2$$
3.97 If the random variable X has the moment-generating function $M_X(u) = \dfrac{2}{2 - u}$, determine the variance of X.

Solution Given: $M_X(u) = \dfrac{2}{2 - u}$

$$M_X'(u) = \frac{d}{du}\left(\frac{2}{2 - u}\right) = \frac{2}{(2 - u)^2}\ \Rightarrow\ E[X] = M_X'(0) = \frac12$$

$$M_X''(u) = \frac{d}{du}\left[\frac{2}{(2 - u)^2}\right] = \frac{4}{(2 - u)^3}\ \Rightarrow\ E[X^2] = M_X''(0) = \frac48 = \frac12$$

$$\sigma_X^2 = E[X^2] - \{E[X]\}^2 = \frac12 - \left(\frac12\right)^2 = \frac14$$
3.98 A random variable X is uniformly distributed on (0, 6). If X is transformed to a new random variable Y = 2(X – 3)² – 4, find E[Y] and Var[Y].

Solution

$$E[X] = \frac{b + a}{2} = \frac62 = 3, \qquad \sigma_X^2 = \frac{(b - a)^2}{12} = \frac{(6)^2}{12} = 3$$

$$\sigma_X^2 = E[X^2] - \{E[X]\}^2\ \Rightarrow\ E[X^2] = 3 + 9 = 12$$

$$E[X^3] = \int_{-\infty}^{\infty} x^3 f_X(x)\,dx = \int_0^6 x^3\,\frac16\,dx = \left.\frac{x^4}{4(6)}\right|_0^6 = \frac{6^3}{4} = 54, \qquad E[X^4] = \frac{6^4}{5} = 259.2$$

Since E[(X − 3)²] = σ_X² = 3, the mean of Y is E[Y] = 2(3) − 4 = 2. Writing Y = 2X² − 12X + 14,

$$Y^2 = 4X^4 - 48X^3 + 200X^2 - 336X + 196$$

$$E[Y^2] = 4\left(\frac{6^4}{5}\right) - 48\left(\frac{6^3}{4}\right) + 200(12) - 336(3) + 196 = 1036.8 - 2592 + 2400 - 1008 + 196 = 32.8$$

$$\sigma_Y^2 = E[Y^2] - \{E[Y]\}^2 = 32.8 - (2)^2 = 28.8$$
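A quick simulation (not part of the text) confirms these moments; it draws X uniformly on (0, 6), applies Y = 2(X − 3)² − 4, and prints the sample mean and variance, which should be near 2 and 28.8.

```python
import numpy as np

# Monte Carlo check for Solved Problem 3.98: Y = 2(X-3)^2 - 4, X ~ U(0, 6).
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 6.0, size=1_000_000)
y = 2 * (x - 3) ** 2 - 4

print(y.mean(), y.var())   # approximately 2 and 28.8
```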
3.99 A fair coin is tossed until a tail appears. Find the mathematical expectation of the number of tosses.

Solution Let X be the random variable denoting the number of tosses. Since a fair coin is tossed, P(H) = 1/2 and P(T) = 1/2. The possible outcomes until a tail appears are T, HT, HHT, HHHT, HHHHT, HHHHHT, and so on, with

$$P(T) = \frac12;\quad P(HT) = \left(\frac12\right)\left(\frac12\right) = \frac1{2^2};\quad P(HHT) = \left(\frac12\right)^3;\quad P(HHHT) = \left(\frac12\right)^4;\quad P(HHHHT) = \left(\frac12\right)^5;\ \ldots$$

$$E[X] = \sum_i x_i\, p_X(x_i) = 1\left(\frac12\right) + 2\left(\frac12\right)^2 + 3\left(\frac12\right)^3 + 4\left(\frac12\right)^4 + 5\left(\frac12\right)^5 + \cdots$$

The above expression is of the form $\sum_{k=1}^{\infty} k a^k$ with a = 1/2, and $\sum_{k=1}^{\infty} k a^k = \dfrac{a}{(1 - a)^2}$. Hence

$$E[X] = \frac{1/2}{\left(1 - \frac12\right)^2} = 2$$
3.100 If the mean and variance of a binomial distribution are 6 and 1.5 respectively, find E[X – P(X ≥ 3)].

Solution Given np = 6 and npq = 1.5, so q = 1.5/6 = 0.25, p = 0.75 and n = 6/0.75 = 8.

$$P(X = 0) = \binom80 p^0 q^8 = (0.25)^8 = 15.26\times10^{-6}$$

$$P(X = 1) = \binom81(0.75)^1(0.25)^7 = 366.2\times10^{-6}$$

$$P(X = 2) = \binom82(0.75)^2(0.25)^6 = 3.845\times10^{-3}$$

$$P(X < 3) \approx 0.0042\ \Rightarrow\ P(X \ge 3) = 1 - 0.0042 = 0.9958$$

$$E[X - P(X \ge 3)] = E[X] - P(X \ge 3) = 6 - 0.9958 = 5.004$$
3.101 A Gaussian random variable X with m_X = 0 and σ_X² = 16 is applied to a square-law, full-wave diode detector with transfer characteristic Y = 3X². Find the mean value of the output voltage Y.

Solution Given: Y = 3X²

$$E[Y] = \int_{-\infty}^{\infty} 3x^2 f_X(x)\,dx = 3\int_{-\infty}^{\infty} x^2\,\frac{e^{-(x - m_X)^2/2\sigma_X^2}}{\sigma_X\sqrt{2\pi}}\,dx, \qquad m_X = 0,\ \sigma_X^2 = 16$$

Let x/σ_X = t, so dx = σ_X dt:

$$E[Y] = 3\int_{-\infty}^{\infty}\frac{(\sigma_X^2 t^2)\, e^{-t^2/2}}{\sigma_X\sqrt{2\pi}}\,\sigma_X\,dt = 3\sigma_X^2\underbrace{\int_{-\infty}^{\infty}\frac{t^2 e^{-t^2/2}}{\sqrt{2\pi}}\,dt}_{=\,1} = 3\sigma_X^2 = 3(16) = 48$$
Solution Consider

$$Y = E[(X - a)^2] = E[X^2 - 2aX + a^2] = E[X^2] - 2aE[X] + a^2$$

$$\frac{dY}{da} = -2E[X] + 2a = 0\ \Rightarrow\ a = E[X]$$
Solution Here X has the pdf f_X(x) = 2x/9 for 0 < x < 3 (and zero elsewhere).
(a) The CDF is

$$F_X(x) = \int_{-\infty}^{x} f_X(u)\,du = \frac29\left(\frac{x^2}{2}\right) = \frac{x^2}{9}, \quad 0 \le x \le 3; \qquad F_X(x) = 1 \text{ for } x > 3$$

(b)

$$m_X = E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \int_0^3 x\left(\frac{2x}{9}\right)dx = \frac29\int_0^3 x^2\,dx = \frac29\cdot\left.\frac{x^3}{3}\right|_0^3 = 2$$

$$E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx = \int_0^3 x^2\left(\frac{2x}{9}\right)dx = \frac29\int_0^3 x^3\,dx = \frac29\cdot\left.\frac{x^4}{4}\right|_0^3 = \frac92$$

$$\sigma_X^2 = E[X^2] - (m_X)^2 = \frac92 - 4 = \frac12$$

(c) The median m satisfies P(X ≥ m) = P(X ≤ m):

$$\int_m^3 \frac29 x\,dx = \int_0^m \frac29 x\,dx\ \Rightarrow\ \frac92 - \frac{m^2}{2} = \frac{m^2}{2}\ \Rightarrow\ m^2 = \frac92, \qquad m = \frac{3}{\sqrt2}$$
For the pdf f_X(x) = (1/a) e^{−b|x|}:

$$E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \frac1a\left[\int_{-\infty}^{0} x\, e^{bx}\,dx + \int_0^{\infty} x\, e^{-bx}\,dx\right]$$

Integrating by parts, the first integral evaluates to −1/b² and the second to +1/b², so

$$E[X] = \frac1a\left[-\frac1{b^2} + \frac1{b^2}\right] = 0$$

$$E[X^2] = \int_{-\infty}^{\infty} x^2\,\frac1a\, e^{-b|x|}\,dx = \frac1a\left[\int_{-\infty}^{0} x^2 e^{bx}\,dx + \int_0^{\infty} x^2 e^{-bx}\,dx\right] = \frac1a\left[\frac2{b^3} + \frac2{b^3}\right]$$

$$E[X^2] = \frac{4}{ab^3}$$
A random variable X has the pdf

$$f_X(x) = \begin{cases}\frac54\left(1 - x^4\right), & 0 < x \le 1\\ 0, & \text{elsewhere}\end{cases}$$

Find E[X], E[X²] and the variance.

Solution

$$E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \int_0^1 \frac54 x\left(1 - x^4\right)dx = \frac54\left[\frac{x^2}{2} - \frac{x^6}{6}\right]_0^1 = \frac54\left[\frac12 - \frac16\right] = \frac54\left(\frac{3 - 1}{6}\right) = \frac5{12}$$

$$\therefore\ E[X] = \frac5{12}$$

$$E[X^2] = \int_0^1 \frac54 x^2\left(1 - x^4\right)dx = \frac54\left(\frac{x^3}{3} - \frac{x^7}{7}\right)_0^1 = \frac54\left[\frac13 - \frac17\right] = \frac54\left[\frac4{21}\right] = \frac5{21}$$

$$\therefore\ E[X^2] = \frac5{21}$$

Variance:

$$\sigma_X^2 = E[X^2] - \{E[X]\}^2 = \frac5{21} - \left(\frac5{12}\right)^2 = \frac5{21} - \frac{25}{144} = 0.0645$$
3.106 In a hall, three ACs are operational with probabilities 0.75, 0.85 and 0.95 respectively. The operation of each AC is independent of the others. Let X be the number of ACs that are operational. Find the mean and variance of X.

Solution X is the number of ACs in operation, so it takes the four values X = 0, 1, 2, 3. Treating X as the sum of three independent indicator variables with success probabilities 0.75, 0.85 and 0.95, E[X] = 0.75 + 0.85 + 0.95 = 2.55, and evaluating E[X²] over the pmf of X gives E[X²] = 6.865.

$$\sigma_X^2 = \text{Var}(X) = E[X^2] - \{E[X]\}^2 = 6.865 - (2.55)^2 = 0.3625$$
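The intermediate pmf can be rebuilt by brute force (an illustration, not from the text): enumerate the 2³ on/off patterns of the three ACs, accumulate the pmf of X, and read off the moments.

```python
from itertools import product

# Enumerate all on/off states of the 3 ACs to get the pmf of X (Solved Problem 3.106).
p = [0.75, 0.85, 0.95]
pmf = {k: 0.0 for k in range(4)}
for states in product([0, 1], repeat=3):
    prob = 1.0
    for s, pi in zip(states, p):
        prob *= pi if s else (1 - pi)
    pmf[sum(states)] += prob

mean = sum(k * q for k, q in pmf.items())
ex2 = sum(k * k * q for k, q in pmf.items())
print(pmf)                    # pmf of X
print(mean, ex2 - mean**2)    # 2.55 and 0.3625
```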
3.107 In a village, a saleswoman is selling soap packets house to house. The probability that she sells a soap packet at any house she visits is 0.25.
(a) If she visits 10 houses, what is the probability that she sells soap packets at exactly six of these houses?
(b) If she visits 10 houses, what is the expected number of packets she sells?
(c) What is the probability that she sells the fourth packet at the eighth house she visits?

Solution Let X denote the number of packets the saleswoman sold.
(a) X is a binomial random variable with probability of success 0.25. The probability that she sells exactly six packets in 10 houses is

$$P(X = 6) = \binom{10}{6}(0.25)^6(0.75)^4 = 0.0162$$

(b) The expected number of soap packets sold after visiting 10 houses is E[X] = np = 10(0.25) = 2.5.
(c) If she sells the fourth packet at the eighth house, she must have sold exactly 3 packets during her visits to the first seven houses and then made a sale at the eighth. The required probability is

$$\binom73(0.25)^3(0.75)^4 \times 0.25 = \binom73(0.25)^4(0.75)^4 = 0.043$$
3.108 If X and Y are independent Poisson variates, show that the conditional distribution of X given X + Y is binomial.

Solution Since X and Y are independent Poisson variates, X + Y is also a Poisson random variable. Let X be distributed with parameter λ_x and Y with parameter λ_y; then X + Y is Poisson with parameter λ_x + λ_y.

$$P(X = x \mid X + Y = z) = \frac{P(X = x \cap Y = z - x)}{P(X + Y = z)} = \frac{P(X = x)\,P(Y = z - x)}{P(X + Y = z)}$$

$$= \frac{\dfrac{e^{-\lambda_x}(\lambda_x)^x}{x!}\cdot\dfrac{e^{-\lambda_y}(\lambda_y)^{z-x}}{(z - x)!}}{\dfrac{e^{-(\lambda_x + \lambda_y)}(\lambda_x + \lambda_y)^z}{z!}} = \frac{z!}{x!(z - x)!}\cdot\frac{(\lambda_x)^x(\lambda_y)^{z-x}}{(\lambda_x + \lambda_y)^x(\lambda_x + \lambda_y)^{z-x}}$$

$$= \binom zx\left(\frac{\lambda_x}{\lambda_x + \lambda_y}\right)^x\left(\frac{\lambda_y}{\lambda_x + \lambda_y}\right)^{z-x} = \binom zx p^x(1 - p)^{z-x}$$

Therefore the conditional distribution is a binomial distribution with $p = \dfrac{\lambda_x}{\lambda_x + \lambda_y}$.
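A simulation sketch (not from the text) that illustrates the result: condition simulated Poisson pairs on X + Y = z and compare the empirical distribution of X with the binomial pmf having p = λ_x/(λ_x + λ_y). The parameters λ_x = 2, λ_y = 3, z = 5 are arbitrary.

```python
import math
import numpy as np

# Conditional law of X given X+Y=z for independent Poissons (Solved Problem 3.108).
rng = np.random.default_rng(4)
lx, ly, z = 2.0, 3.0, 5          # arbitrary illustrative values

x = rng.poisson(lx, size=2_000_000)
y = rng.poisson(ly, size=2_000_000)
xs = x[x + y == z]               # keep only samples with X + Y = z

p = lx / (lx + ly)
for k in range(z + 1):
    empirical = np.mean(xs == k)
    binom = math.comb(z, k) * p**k * (1 - p) ** (z - k)
    print(k, round(empirical, 4), round(binom, 4))   # the two columns agree
```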
3.109 A random sample of size n is taken from a gamma distribution with parameters λ and a. Show that the mean m_X of the sample also follows a gamma distribution.

Solution The MGF of the sample mean m_X = (X₁ + X₂ + ⋯ + X_n)/n is

$$M_{m_X}(u) = M_{X_1 + X_2 + \cdots + X_n}\!\left(\frac un\right) = M_{X_1}\!\left(\frac un\right)M_{X_2}\!\left(\frac un\right)\cdots M_{X_n}\!\left(\frac un\right)$$

$$= \left(\frac{\lambda}{\lambda - \frac un}\right)^a\left(\frac{\lambda}{\lambda - \frac un}\right)^a\cdots\left(\frac{\lambda}{\lambda - \frac un}\right)^a = \left(\frac{\lambda}{\lambda - \frac un}\right)^{na} = \left(\frac{n\lambda}{n\lambda - u}\right)^{na}$$

Therefore m_X follows a gamma distribution with parameters nλ and na.
3.111 A discrete random variable X has MGF $\left(\frac13 + \frac23 e^u\right)^4$. Find E(X), Var(X) and P(X = 3).

Solution Comparing with the MGF of a binomial random variable, we get q = 1/3, p = 2/3 and n = 4.

$$E[X] = np = 4\left(\frac23\right) = \frac83, \qquad \text{Var}(X) = npq = 4\left(\frac23\right)\left(\frac13\right) = \frac89$$

$$P(X = x) = \binom nx p^x q^{n-x};\ x = 0, 1, 2, \ldots, n$$

$$P(X = 3) = \binom43\left(\frac23\right)^3\left(\frac13\right)^1 = 4\left(\frac8{27}\right)\left(\frac13\right) = \frac{32}{81}$$
For the binomial distribution, the rth central moment is

$$\mu_r = E[(X - np)^r] = \sum_x (x - np)^r\binom nx p^x(1 - p)^{n-x}$$

Differentiating with respect to p (and writing q = 1 − p), we get

$$\frac{d\mu_r}{dp} = \sum_x\binom nx\left\{(x - np)^{r-1}(-rn)\, p^x q^{n-x} + (x - np)^r\left[x p^{x-1} q^{n-x} - (n - x) p^x q^{n-x-1}\right]\right\}$$

$$= -rn\sum_x\binom nx(x - np)^{r-1} p^x q^{n-x} + \sum_x\binom nx(x - np)^r p^{x-1} q^{n-x-1}\{x(1 - p) - p(n - x)\}$$

The first sum equals −rn μ_{r−1}, and since x(1 − p) − p(n − x) = x − np,

$$\frac{d\mu_r}{dp} = -rn\,\mu_{r-1} + \frac{1}{pq}\sum_x\binom nx(x - np)^{r+1} p^x q^{n-x} = -rn\,\mu_{r-1} + \frac{1}{pq}\,\mu_{r+1}$$

$$\Rightarrow\ \mu_{r+1} = pq\left[\frac{d\mu_r}{dp} + nr\,\mu_{r-1}\right]$$

With r = 1: $\mu_2 = pq\left[\dfrac{d\mu_1}{dp} + n\mu_0\right]$. We know μ₁ = 0 and μ₀ = 1, so

$$\mu_2 = npq = np(1 - p)$$

With r = 2:

$$\mu_3 = pq\left[\frac{d\mu_2}{dp} + 2n\mu_1\right] = pq\left\{\frac{d}{dp}\left[np(1 - p)\right]\right\} = npq(1 - 2p)$$

With r = 3:

$$\mu_4 = npq\left[1 + 3pq(n - 2)\right]$$
3.113 The MGF of a discrete random variable X taking values 1, 2, …, ∞ is e^u(5 – 4e^u)^{–1}. Find the mean and variance of X.

Solution Given:

$$M_X(u) = \frac{e^u}{5 - 4e^u} = \frac{0.2\, e^u}{1 - 0.8\, e^u} \tag{3.179}$$

The MGF of a geometric random variable is given by

$$M_X(u) = \frac{p e^u}{1 - q e^u} \tag{3.180}$$

Comparing Eq. (3.179) and Eq. (3.180), we get p = 0.2 and q = 0.8.

$$\text{Mean} = \frac1p = \frac1{0.2} = 5, \qquad \text{Variance} = \frac{q}{p^2} = \frac{0.8}{(0.2)^2} = 20$$
Solution Given:

$$P(X = r) = 2\left(\frac13\right)^r,\ r = 1, 2, \ldots, \infty\ =\ \frac23\left(\frac13\right)^{r-1};\ r = 1, 2, \ldots$$

Comparing with the geometric distribution P(X = r) = pq^{r−1}, r = 1, 2, …, we get p = 2/3 and q = 1/3.

$$\text{Mean} = \frac1p = \frac32 = 1.5, \qquad \text{Variance} = \frac{q}{p^2} = \frac{1/3}{4/9} = \frac13\cdot\frac94 = \frac34$$
3.115 If the MGF of a random variable X is of the form (0.25e^u + 0.75)^6, find the MGF of 2X + 1.

Solution The MGF of a binomial random variable X is M_X(u) = (q + pe^u)^n. Given M_X(u) = (0.25e^u + 0.75)^6, hence p = 0.25, q = 0.75 and n = 6.
The MGF of 2X + 1 is given by

$$M_{2X+1}(u) = E\left[e^{u(2X + 1)}\right] = e^u M_X(2u) = e^u\left(0.25\, e^{2u} + 0.75\right)^6$$
3.116 If X follows B Á 3, ˜ and Y follows B Á 5, ˜ . Find P(X + Y ≥ 1).
Ë 3¯ Ë 3¯
Solution Given:
1
n1 = 3 and p1 =
3
Operations on One Random Variable 3.131
1
n2 = 5 and p2 =
3
Since p1 = p2; X + Y follows binomial distribution with n = n1 + n2 = 8. Therefore X + Y follows binomial
1 2
distribution with n = 8 and p = . We can find that q = .
3 3
Ê nˆ
P(X + Y = x) = Á ˜ p x q n - x
Ë x¯
x n- x
Ê 8ˆ Ê 1 ˆ Ê 2 ˆ
ÁË x ˜¯ ÁË 3 ˜¯ ÁË 3 ˜¯
P(X + Y ≥ 1) = 1 – P(X + Y = 0)
ÏÊ 8ˆ Ê 1 ˆ 0 Ê 2 ˆ 8 ¸
= 1 - ÔÌÁ ˜ Á ˜ Á ˜ Ô˝
ÔÓË 0¯ Ë 3 ¯ Ë 3 ¯ Ô˛
8
Ê 2ˆ
= 1 - Á ˜ = 0.961
Ë 3¯
3.117 If X is a Poisson variable with mean λ, show that $\dfrac{X - \lambda}{\sqrt\lambda}$ is a variable with zero mean and unit variance.

Solution Since X is a Poisson variable, the mean and variance are equal: E[X] = Var[X] = λ.

$$E\left[\frac{X - \lambda}{\sqrt\lambda}\right] = \frac{1}{\sqrt\lambda}E[X] - \frac{\lambda}{\sqrt\lambda} = \frac{\lambda}{\sqrt\lambda} - \frac{\lambda}{\sqrt\lambda} = 0$$

$$\text{Var}\left[\frac{X - \lambda}{\sqrt\lambda}\right] = \frac1\lambda\,\text{Var}(X - \lambda) = \frac1\lambda\,\text{Var}(X) = \frac\lambda\lambda = 1$$

Therefore $\dfrac{X - \lambda}{\sqrt\lambda}$ is a random variable with zero mean and unit variance.
3.119 The MGF of a discrete random variable X is $e^{9(e^u - 1)}$. Find P(X = λ + σ) if λ and σ are the mean and standard deviation of X.

Solution The MGF of a Poisson random variable is $M_X(u) = e^{\lambda(e^u - 1)}$. Comparing with the given $M_X(u) = e^{9(e^u - 1)}$, we obtain λ = 9.
For a Poisson random variable E[X] = Var(X) = λ = 9, so the standard deviation is σ = √(Var(X)) = √9 = 3.

$$P(X = \lambda + \sigma) = P(X = 12) = \left.\frac{e^{-\lambda}(\lambda)^{12}}{12!}\right|_{\lambda = 9} = \frac{e^{-9}(9)^{12}}{12!} = 0.07276$$
3.120 If the random variable X follows an exponential distribution with parameter 2, prove that Y = X³ follows a Weibull distribution with parameters 2 and 1/3.

Solution Given:

$$f_X(x) = 2e^{-2x},\ x \ge 0; \qquad f_X(x) = 0,\ x < 0$$

The new random variable is Y = X³ = T(X). Let y = x³; then x = y^{1/3} and T′(x) = 3x², so

$$f_Y(y) = \left.\frac{f_X(x)}{|T'(x)|}\right|_{x = y^{1/3}} = \left.\frac{2e^{-2x}}{3x^2}\right|_{x = y^{1/3}} = \frac23\, y^{-2/3}\, e^{-2y^{1/3}}$$

$$\Rightarrow\ f_Y(y) = \frac23\, y^{-2/3}\, e^{-2y^{1/3}},\ y > 0$$

which is the Weibull distribution with a = 2 and b = 1/3.
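An optional empirical check (not in the text): draw X ~ exponential(2), form Y = X³, and compare the empirical CDF of Y with the Weibull CDF implied by the density above, F_Y(y) = 1 − e^{−2y^{1/3}}.

```python
import numpy as np

# Check that Y = X^3 with X ~ Exp(rate 2) has CDF F_Y(y) = 1 - exp(-2 * y**(1/3)).
rng = np.random.default_rng(5)
x = rng.exponential(scale=0.5, size=1_000_000)  # rate 2 => scale 1/2
y = x**3

for q in [0.01, 0.1, 0.5, 1.0]:
    empirical = np.mean(y <= q)
    weibull_cdf = 1 - np.exp(-2 * q ** (1 / 3))
    print(q, round(empirical, 4), round(weibull_cdf, 4))   # columns agree
```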
A random variable X has the probability function

$$P(X = r) = \frac{C\lambda^r}{r!},\ r = 0, 1, 2, \ldots$$

We have $\sum_{r=0}^{\infty} P(X = r) = C e^{\lambda} = 1$, so C = e^{−λ}; that is, X is Poisson with parameter λ. Hence

$$P(X = 0) = \frac{e^{-\lambda}\lambda^0}{0!} = e^{-\lambda}$$

and

$$P(X \ge 3) = 1 - e^{-\lambda}\left(1 + \lambda + \frac{\lambda^2}{2}\right)$$
3.122 A class has 12 boys and 8 girls. The teacher randomly selects a group of 12 students to represent the class in a competition. What is the probability that six members of the group are girls? What is the expected number of boys in the group?

Solution The total number of students is 20. A group of 12 students can be selected in $\binom{20}{12}$ ways. Since six members of the group are girls, the remaining six students are boys. Six girls can be selected in $\binom86$ ways and six boys in $\binom{12}6$ ways. Therefore the required probability is

$$p = \frac{\binom{12}6\binom86}{\binom{20}{12}} = 0.2054$$

The probability that a randomly selected student is a boy is p = 12/20 = 0.6, so the expected number of boys in the selected group of 12 students is 12(0.6) = 7.2.
3.123 A purse contains 10 ₹2 coins and 2 ₹5 coins. Let X be the total amount that results when two coins are drawn from the purse without replacement.
(a) Plot the CDF of the random variable X.
(b) Find the mean and variance of X.

Solution The purse contains 10 ₹2 coins and 2 ₹5 coins (12 coins in all). When two coins are drawn at random without replacement, the possible values of the total amount are ₹4, ₹7 and ₹10. Let E₁ and E₂ be the outcomes of the first and second draws.
The total is 4 when both draws give ₹2 coins. Therefore

$$P(X = 4) = \frac{10}{12}\cdot\frac9{11} = \frac{45}{66}$$

Similarly,

$$P(X = 7) = P(E_1 = 2)P(E_2 = 5) + P(E_1 = 5)P(E_2 = 2) = \frac{10}{12}\cdot\frac2{11} + \frac2{12}\cdot\frac{10}{11} = \frac{40}{132}$$

$$P(X = 10) = P(E_1 = 5)P(E_2 = 5) = \frac2{12}\cdot\frac1{11} = \frac2{132}$$

[Fig. 3.12: CDF of X, with steps of 45/66, 40/132 and 2/132 at x = 4, 7 and 10]

$$E[X] = 4\left(\frac{45}{66}\right) + 7\left(\frac{40}{132}\right) + 10\left(\frac2{132}\right) = 5$$

$$E[X^2] = (4)^2\frac{45}{66} + (7)^2\frac{40}{132} + (10)^2\frac2{132} = 27.27$$

$$\sigma_X^2 = E[X^2] - \{E[X]\}^2 = 27.27 - (5)^2 = 2.2727$$
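A short exact check (illustrative, not from the text) using Python's Fraction type to reproduce the pmf and moments of Solved Problem 3.123:

```python
from fractions import Fraction as F

# Exact pmf of the total from two draws without replacement (10 coins of 2, 2 coins of 5).
p4 = F(10, 12) * F(9, 11)                        # both draws give a 2-rupee coin
p7 = F(10, 12) * F(2, 11) + F(2, 12) * F(10, 11)
p10 = F(2, 12) * F(1, 11)

mean = 4 * p4 + 7 * p7 + 10 * p10
ex2 = 16 * p4 + 49 * p7 + 100 * p10
print(p4 + p7 + p10)         # 1 (pmf sums to one)
print(mean, ex2 - mean**2)   # 5 and 25/11 (= 2.2727...)
```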
3.124 The random variable X is uniformly distributed in the interval (–2, 2).
(a) Find and plot the CDF of X.
(b) Use the CDF to find P(X ≤ 0) and P(|X – 1/2| < 1).

Solution Given: X is uniformly distributed in the interval (–2, 2). The pdf is given by

$$f_X(x) = \frac{1}{2 - (-2)} = \frac14 \ \text{ for } -2 \le x \le 2; \qquad 0 \text{ otherwise}$$

The CDF is given by

$$F_X(x) = \int_{-2}^{x} f_X(u)\,du = \frac14(x + 2); \quad -2 \le x \le 2$$

[Fig. 3.13: plot of F_X(x), rising linearly from 0 at x = –2 to 1 at x = 2]

$$P(X \le 0) = F_X(0) = \frac12$$

$$P\left(\left|X - \frac12\right| < 1\right) = P\left(-1 < X - \frac12 < 1\right) = P\left(-\frac12 < X < \frac32\right)$$

$$= F_X\left(\frac32\right) - F_X\left(-\frac12\right) = \frac14\left(\frac32 + 2\right) - \frac14\left(-\frac12 + 2\right) = \frac78 - \frac38 = \frac12$$
REVIEW QUESTIONS
22. State and prove properties of variance of a random variable.
23. Explain the following terms: (i) Expectation, (ii) Conditional expected value, (iii) Covariance.
24. Explain the following terms:
(i) Variance (ii) Skew (iii) Kurtosis (iv) Moment
25. Find the nth moment of a uniform random variable and hence its mean.
26. State the Chebyshev’s inequality.
27. What is characteristic function?
28. State and prove properties of characteristic functions of a random variable X.
29. Explain about the moment-generating function of a random variable.
30. Explain in detail about transformation.
31. Explain in detail about the nonmonotonic transformation of a continuous random variable.
32. Find the mean and variance of a uniform distribution.
33. Find the mean and variance of a Laplace distribution.
34. Find the mean and variance of a Gaussian distribution.
35. Find the mean and variance of an exponential distribution.
36. Find the mean and variance of a binomial distribution.
37. Find the mean and variance of a geometric distribution.
38. Find MGF and CF of a random variable with following distributions: (i) Uniform (ii) Binomial (iii)
Normal (iv) Exponential (v) Gamma (vi) Chi-square
39. If M_X(u) is the MGF of a random variable X about the origin, then show that
    $$E[X^r] = \left.\frac{d^r M_X(u)}{du^r}\right|_{u=0}$$
40. Show that E[(X – a)²] is minimized at a = E[X].
41. If X has an exponential distribution with parameter λ, show that Y = X^{1/b} has a Weibull distribution.
42. Find the rth moment of a random variable X in terms of its characteristic function.
Operations on One Random Variable 3.137
EXERCISES
Problems
1. Given a random variable X and its density function
fX ( x) = 1 0 < x < 1
=0 otherwise
Evaluate E[X]. [Ans. 1/2]
Ê 4 ˆ
2. If a random variable X is uniformly distributed over (–a, 3a). Find the variance of X. Á Ans : a 2 ˜
Ë 3 ¯
3. For a Poisson random variable X with parameter l show that E[X(X – 1)] = l2 and E[X(X – 1)
(X – 2)] = l3.
4. X is a uniformly distributed random variable over (–1, 1). Find the expected value of
ÊpXˆ
Y = sin Á and Z = |X|.
Ë 2 ˜¯
ÏÔ5e -5 x 0£ x£•
5. The density function of a random variable X is gX(x) = Ì
ÔÓ 0 elsewhere
Find, (i) E[X], (ii) E[(X – 1)2], (iii) E[3X – 1]
Ê 1 17 2 ˆ
ÁË Ans. 5 , 25 , 5 ˜¯
6. The probabilities of students X, Y and Z passing an examination are 0.6, 0.8 and 0.95 respectively.
Let N be the number of students passing the examination. Find the mean and variance of N.
7. The temperature of a vessel is measured four times to be t = 99, 100, 101, 102 with the probabilities
of correct measurement 0.2, 0.3, 0.4 and 0.1 respectively. Find the expected value and variance of
the temperature.
8. Find the mean and variance of the random variable X with pdf . Ê 1 2ˆ
ÁË Ans : 3 , 9 ˜¯
Ï1
Ô ( x + 1) for - 1 £ x £ 1
fX ( x) = Ì 2
ÔÓ 0 otherwise
Ê 1 ˆ
ÁË Ans : 2u sin h 2u˜¯
1
14. If X is a discrate random variable with probability function PX(x) = x = 1, 2, 3 find
kx
(i) MGF of X, (ii) Mean and variance of X.
Ê eu k ˆ
Á Ans : ; ˜
Ë k - e (k - 1)2 ¯
u
Ï x - x /2
x>0
Find the MGF of the random variable X having the pdf fX(x) = ÔÌ 4
e ,
15.
ÔÓ 0 elsewhere
Ê 1 ˆ
Also deduce the first four moments about the origin. Á Ans : , 4, 24, 192, 1920˜
Ë (1 - 2u) 2
¯
16. If a random variable has an exponential distribution with parameter 1, find the pdf of Y = X.
2
( Ans : fY ( y) = 2 y e - y ; y > 0)
17. Given a random variable X with pdf
Ï1 - x -1 £ x £ 1
fX(x) = Ì
Ó 0 elsewhere
Find the mean and variance of X. (Ans: 0, 1/6)
18. Given an exponential distributed random variable X with parameter l = 1. Find the mean of
Y = 2x e–3x
19. X is a poisson random variable with parameter l and P(X = 1) = P(X = 2). Find the mean and
variance of X and probability P(X = 3).
20. Given a random variable X ~ U(0, 1), determine the constants c and d such that Y = (X + d) is
uniformly distributed over (a, b).
21. The pdf of a random variable X is given by
1
fX(x) = for 2 £ x £ 5
3
Find the pdf of Y = 3X – 5.
Operations on One Random Variable 3.139
23. Find the characteristic function of a random variable with density function
Ïx
for 0 £ x £ 2 Ê ˆ
fX(x) = ÔÌ 2
1
ÁË Ans : [e j 2w (1 - 2 jw ) - 1˜
2w 2 ¯
ÔÓ 0 otherwise
24. X is a uniformly distributed random variable over (–1, 1). Let Y = 4 – X2. Find the pdf of Y.
Ê 1 -1/2 ˆ
ÁË Ans : 2 (4 - y) ˜¯
25. The current having thrown a 2W resistor is uniformly distributed over the interval (9, 11). Find the
Ê 1 1/2 ˆ
pdf of the power P. ÁË Ans : 8 (2 p) ˜¯
1 5u
26. The MGF of a random variable X is (e - e 4u ) . Find E[X].
u
pX
27. If X is uniformly distributed in (–1, 1), then find the pdf of Y = sin .
2
28. X is random variable with mX = 0 and variance sX2. If Y is a random variable defined as Y = X3.
Find fY(y).
29. X is a random variable with Chi-square distribution with pdf
n x
-
x2 e 2
fX(x) = n
x>0
Ê nˆ
22 GÁ ˜
Ë 2¯
=0 elsewhere
Find pdf of Y = X.
30. X is a Laplace random variable with pdf.
1 - | x |a
fX(x) = e . Find the characteristic function
2a
e - jmw
31. The characteristic function of the Laplace density is given by fX(w) = ,
find mean and variance. (1 + bw )2
3.140 Probability Theory and Random Processes
Multiple-Choice Questions
1. For a random variable X with the pdf f_X(x) shown in Fig. 3.14, the mean and variance are, respectively:
   [Fig. 3.14: pdf of X]
   (a) 1/2 (b) 1/4 (c) 1/5 (d) 1/3
12. Var(aX + bY) =
   (a) a Var(X) + b Var(Y) + 2 Cov(X, Y)
   (b) a² Var(X) + b² Var(Y)
   (c) a² Var(X) + b² Var(Y) + 2ab Cov(X, Y)
   (d) a² Var(X) + b² Var(Y) – 2ab
13. A continuous random variable X has the pdf f_X(x) = (1/2)x² e^{–x} for x > 0. The mean and variance are, respectively:
   (a) 3 and 2 (b) 3 and 4 (c) 2 and 3 (d) 3 and 3
14. The characteristic function of a chi-square random variable is given by
   (a) φ_X(ω) = 1/(1 + 2jω) (b) 1/(1 – 2jω)^n (c) 1/(1 – 2jω)^{n/2} (d) 1/(1 + 2jω)^{n/2}
15. The MGF of an exponential random variable with λ = 1 is given by
   (a) 1/(1 – u)² (b) 1/(1 – u) (c) 1/(1 + u) (d) 1/(1 + u)²
16. The PMF of a random variable X is given by p_X(x) = e^{–λ}λ^x/x!, x = 0, 1, 2, …
   The value of P(X ≥ 2) is
   (a) zero (b) λ² + 2λ (c) 1 – e^{–λ} – λ² + 2λ (d) 1 – e^{–λ} – λe^{–λ} – λ²e^{–λ}/2
26. If X is a random variable with finite mean m_X and variance σ_X², then for any value k > 0, Chebyshev's inequality states that
   (a) P{|X – m_X| ≥ k} ≤ σ_X²/k² (b) P{|X – m_X| ≤ k} ≤ σ_X²/k²
   (c) P{|X – m_X| > k/ε} ≤ σ_X²/k² (d) P{|X – m_X|} ≥ k/σ_X²
27. A die is rolled until the total sum of all rolls exceeds 300. The probability that at least 80 rolls are needed to achieve the sum 300 is
   (a) 0.9431 (b) 0.8632 (c) 0.9122 (d) 0.9731
28. X is a random variable with m_X = 3 and σ_X² = 16/3. The upper bound for P{|X – 3| ≥ 1} is
   (a) 16/3 (b) 11/3 (c) 7/3 (d) 19/25
30. Which of the following is used to measure the asymmetry of a pdf?
   (a) Mean (b) Skew (c) Variance (d) Kurtosis
31. Which of the following is used to measure the peakedness of a pdf near the mean?
   (a) Kurtosis (b) Skew (c) Mean (d) Variance
4.1 INTRODUCTION
In the previous chapter we studied the properties of a single random variable defined on a given sample space. But in many random experiments, we may have to deal with two or more random variables defined on the same sample space. For example, in analysing an electric circuit, the current (i) and voltage (v) at different points in the circuit may be of interest, and we would consider (i, v) as a single experimental outcome. In another example, the hardness (h) and the tensile strength (t) of a steel piece may be of interest; in this case, the outcome of the experiment can be represented as (h, t). Consider one more example in which we collect the details of students. If we concentrate only on the age of students, then we deal with a single random variable. On the other hand, if we collect details like age, weight and height, then we deal with multiple random variables.
In this chapter, we extend the theory we studied for a single random variable to two or more random variables. We first consider the bivariate random variable and later consider processes with more than two random variables.
REVIEW QUESTIONS
1. Define vector random variable.
2. Define two-dimensional random variable.
Solved Problems
4.1 Two events A and B defined on a sample space S are related to a joint sample space through random
variables X and Y. The events are defined as A = {x1 £ X £ x2} and B = {Y £ y}. Make a sketch of two sample
spaces showing areas corresponding to both events and the event A « B = {x1 < X £ x2, Y £ y}.
Solution The events are defined as A = {x1 £ X £ x2} and B = {Y £ y}
The events A and B refer to the sample space s, while events {x1 £ X £ x2} and {Y £ y} refer to the joint
sample space Sj. Event A corresponds to all points in Sj for which the X coordinate values lie between x1 and
x2, and the event B corresponds to the Y coordinate values in Sj not exceeding y. Our interest is in the event A
« B in S. This event A « B = {x1 £ X £ x2, Y £ y} defined on Sj is shown cross-hatched in Fig. 4.2.
[Fig. 4.2: the event A ∩ B = {x₁ < X ≤ x₂, Y ≤ y} shown on S and on the joint sample space S_J]
The joint event A ∩ B defined on S corresponds to the joint event {x₁ ≤ X ≤ x₂ and y₁ ≤ Y ≤ y₂ or y₃ ≤ Y ≤ y₄} defined on S_J. This joint event is shown cross-hatched in Fig. 4.3.
[Fig. 4.3: the joint event A ∩ B defined on S and the corresponding joint event defined on S_J]
4.3 For a pair of random variables, sketch the region of the plane corresponding to the following events: (a) {X + Y > 2}, (b) {X/Y < 2}.

Solution
[Fig. 4.4: the region corresponding to the event X + Y > 2, i.e., the half-plane Y > 2 – X above the line X + Y = 2]
[Fig. 4.6: the region of the plane for the event A ∩ B ∩ C, where A, B and C are defined in Solved Problem 4.4]
Practice Problem
4.1 Two events A and B defined on the sample space S are related to a joint sample space through random variables X and Y and are defined by A = {x₁ ≤ X ≤ x₂} and B = {y₁ ≤ Y ≤ y₂}. Make a sketch of the two sample spaces showing the areas corresponding to both events and the event A ∩ B.

3. $\sum_{x \le a}\sum_{y \le b} p_{X,Y}(x, y) = F_{X,Y}(a, b)$ (4.6)
4. If X and Y are independent random variables,
   $p_{X,Y}(x, y) = p_X(x)\, p_Y(y)$ (4.7)
5. If X has N possible values x₁, x₂, …, x_N and Y has M possible values y₁, y₂, …, y_M, then
   $F_{X,Y}(x, y) = \sum_{n=1}^{N}\sum_{m=1}^{M} P(x_n, y_m)\, u(x - x_n)\, u(y - y_m)$ (4.8)
Solved Problems
4.5 The joint probability distribution of two random variables X and Y is represented by the joint probability matrix

          Y = 1   Y = 2   Y = 3   Y = 4
X = 1     0.1     0       0.2     0
X = 2     0       0.1     0       0
X = 3     0.2     0       0.3     0
X = 4     0       0       0       0.1

Find the marginal distributions of X and Y. Find P(X ≤ 2, Y ≤ 4).
4.6 A fair coin is tossed four times. Let X denote the number of heads obtained in the first two tosses, and
let Y denote the number of heads obtained in the last two tosses.
Find the joint pmf of X and Y. Show that X and Y are independent random variables.
Solution The sample space and the values of X and Y are shown below.
S X Y
TTTT 0 0
TTTH 0 1
TTHT 0 1
TTHH 0 2
THTT 1 0
THTH 1 1
THHT 1 1
THHH 1 2
HTTT 1 0
HTTH 1 1
HTHT 1 1
HTHH 1 2
HHTT 2 0
HHTH 2 1
HHHT 2 1
HHHH 2 2
$$p_{X,Y}(0, 0) = P(TTTT) = \frac1{16}$$
$$p_{X,Y}(0, 1) = P(TTTH) + P(TTHT) = \frac1{16} + \frac1{16} = \frac18$$
$$p_{X,Y}(0, 2) = P(TTHH) = \frac1{16}$$
$$p_{X,Y}(1, 0) = P(THTT) + P(HTTT) = \frac1{16} + \frac1{16} = \frac18$$
$$p_{X,Y}(1, 1) = P(THTH) + P(THHT) + P(HTTH) + P(HTHT) = \frac4{16} = \frac14$$
$$p_{X,Y}(1, 2) = P(THHH) + P(HTHH) = \frac1{16} + \frac1{16} = \frac18$$
$$p_{X,Y}(2, 0) = P(HHTT) = \frac1{16}$$
$$p_{X,Y}(2, 1) = P(HHTH) + P(HHHT) = \frac1{16} + \frac1{16} = \frac18$$
$$p_{X,Y}(2, 2) = P(HHHH) = \frac1{16}$$

The joint pmf of X and Y is tabulated below; the marginals are p_X(0) = p_X(2) = 1/4, p_X(1) = 1/2, and the same for Y.

          Y = 0   Y = 1   Y = 2
X = 0     1/16    1/8     1/16
X = 1     1/8     1/4     1/8
X = 2     1/16    1/8     1/16

Checking the products of the marginals:
p_X(0)p_Y(1) = (1/4)(1/2) = 1/8 = p_{X,Y}(0, 1)
p_X(0)p_Y(2) = (1/4)(1/4) = 1/16 = p_{X,Y}(0, 2)
p_X(1)p_Y(0) = (1/2)(1/4) = 1/8 = p_{X,Y}(1, 0)
p_X(1)p_Y(1) = (1/2)(1/2) = 1/4 = p_{X,Y}(1, 1)
p_X(1)p_Y(2) = (1/2)(1/4) = 1/8 = p_{X,Y}(1, 2)
p_X(2)p_Y(2) = (1/4)(1/4) = 1/16 = p_{X,Y}(2, 2)
For all values of X and Y, the joint pmf satisfies p_{X,Y}(x, y) = p_X(x)p_Y(y). Therefore, X and Y are statistically independent.
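The same check can be automated (an illustration, not from the text): enumerate all 16 equally likely toss sequences, build the joint pmf of (X, Y), and verify the product rule.

```python
from itertools import product

# Solved Problem 4.6: X = heads in first two tosses, Y = heads in last two tosses.
joint = {}
for seq in product("HT", repeat=4):
    x = seq[:2].count("H")
    y = seq[2:].count("H")
    joint[(x, y)] = joint.get((x, y), 0) + 1 / 16

px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in range(3)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in range(3)}

independent = all(abs(joint[(x, y)] - px[x] * py[y]) < 1e-12
                  for x in range(3) for y in range(3))
print(independent)   # True: X and Y are independent
```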
Solution Given the joint pmf p_{X,Y}(x_i, y_j) = k(x_i + y_j) for x_i = 1, 2 and y_j = 1, 2, 3, we use

$$\sum_{x_i}\sum_{y_j} p_{X,Y}(x_i, y_j) = 1$$

$$\Rightarrow\ k\sum_{x_i=1}^{2}\sum_{y_j=1}^{3}(x_i + y_j) = k\sum_{x_i=1}^{2}\left[(x_i + 1) + (x_i + 2) + (x_i + 3)\right] = k\sum_{x_i=1}^{2}(3x_i + 6) = k\{(3 + 6) + (6 + 6)\} = 21k$$

$$21k = 1\ \Rightarrow\ k = \frac1{21}$$

The marginal pmf of X is

$$p_X(x_i) = \sum_{y_j=1}^{3} k(x_i + y_j) = \frac1{21}\left[(x_i + 1) + (x_i + 2) + (x_i + 3)\right] = \frac{3x_i + 6}{21}, \quad x_i = 1, 2$$

The marginal pmf of Y is

$$p_Y(y_j) = \sum_{x_i=1}^{2} k(x_i + y_j) = \frac{(1 + y_j) + (2 + y_j)}{21} = \frac{3 + 2y_j}{21}, \quad y_j = 1, 2, 3$$
Practice Problems

Solved Problems
For the joint pmf p_{X,Y}(x_i, y_j) = k x_i y_j², x_i = 1, 2, 3 and y_j = 1, 2, 3:

$$\sum_{x_i=1}^{3}\sum_{y_j=1}^{3} k x_i y_j^2 = k\sum_{x_i=1}^{3}(x_i + 4x_i + 9x_i) = k\sum_{x_i=1}^{3} 14x_i = k[14 + 28 + 42] = 84k$$

$$84k = 1\ \Rightarrow\ k = \frac1{84}$$

The marginal pmf of X is

$$p_X(x_i) = \frac1{84}\left[x_i + 4x_i + 9x_i\right] = \frac{x_i}{6}; \quad x_i = 1, 2, 3$$

The marginal pmf of Y is

$$p_Y(y_j) = \sum_{x_i} p_{X,Y}(x_i, y_j) = \frac1{84}\sum_{x_i=1}^{3} x_i y_j^2 = \frac{y_j^2}{14}; \quad y_j = 1, 2, 3$$
4.9 The joint pmf of (X, Y) is given by p_{X,Y}(x, y) = k(2x + 3y); x = 0, 1, 2; y = 1, 2, 3. Find all marginal probability distributions. Find P(X + Y > 3).

Solution The joint probability distribution is represented by the probability matrix shown below.
For x = 0: p(0, y) = 3yk for y = 1, 2, 3, so p(0, 1) = 3k, p(0, 2) = 6k, p(0, 3) = 9k.
For x = 1: p(1, y) = k(2 + 3y), so p(1, 1) = 5k, p(1, 2) = 8k, p(1, 3) = 11k.
For x = 2: p(2, y) = k(4 + 3y), so p(2, 1) = 7k, p(2, 2) = 10k, p(2, 3) = 13k.
We know ΣΣ p(x_i, y_j) = 1, which gives 72k = 1, i.e., k = 1/72.

          Y = 1   Y = 2   Y = 3   Row sum
X = 0     3/72    6/72    9/72    P(X = 0) = 18/72
X = 1     5/72    8/72    11/72   P(X = 1) = 24/72
X = 2     7/72    10/72   13/72   P(X = 2) = 30/72
Column    15/72   24/72   33/72   1
sum       P(Y=1)  P(Y=2)  P(Y=3)

The pairs with x + y > 3 are (1, 3), (2, 2) and (2, 3), so

$$P(X + Y > 3) = \frac{11 + 10 + 13}{72} = \frac{34}{72} = \frac{17}{36}$$
4.10 The joint sample space for two random variables X and Y and the corresponding probabilities are shown in the table.

(X, Y)   (1, 1)   (2, 2)   (3, 3)   (4, 4)
p        0.2      0.3      0.35     0.15

Find
(a) F_{X,Y}(x, y)
(b) the marginal distribution functions of X and Y
(c) P(X ≤ 2, Y ≤ 2)
(d) P(1 < X ≤ 3, Y ≥ 3)

Solution
(a) $F_{X,Y}(x, y) = P(X \le x, Y \le y) = \sum_{x_i \le x}\sum_{y_j \le y} P(X = x_i, Y = y_j)$
(b) The marginals are obtained from $P(X = x_i) = p_X(x_i) = \sum_{y_j} p_{X,Y}(x_i, y_j)$ and $P(Y = y_j) = p_Y(y_j) = \sum_{x_i} p_{X,Y}(x_i, y_j)$; for example, $P(X = 1) = \sum_{y_j} p_{X,Y}(1, y_j)$ and $P(Y = 2) = \sum_{x_i} p_{X,Y}(x_i, 2)$.
4.11 Two discrete random variables X and Y have the joint pmf given by the following table:

          Y = 1   Y = 2   Y = 3
X = 1     1/12    1/6     1/12
X = 2     1/6     1/4     1/12
X = 3     1/12    1/12    0

Compute the probability of each of the following events:
(a) X ≤ 1.5 (b) XY even (c) Y is even given that X is even.

Solution
(a) P(X ≤ 1.5) = P(X = 1) = P(X = 1, Y = 1) + P(X = 1, Y = 2) + P(X = 1, Y = 3)
    = 1/12 + 1/6 + 1/12 = 1/3

Practice Problem
4.4 Find the marginal pmfs for the pair of random variables whose joint pmf is given. Find P(X = Y) and P(X ≥ Y). (Ans. 1/2, 2/3)
Solved Problems
4.12 In an experiment, 3 balls are randomly selected from an urn containing 3 red, 3 white and 4 blue balls. If X denotes the number of red balls chosen and Y the number of white balls chosen, find the joint pmf of X and Y. Also find the marginal pmfs of X and Y.

Solution Let i and j denote the numbers of red and white balls chosen respectively, and write p(i, j) = P(X = i, Y = j). The total number of balls in the urn is 10, so 3 balls can be selected in $\binom{10}3 = 120$ ways. We find p(i, j) by varying i and j. For p(0, 0), no red or white balls are chosen, so all 3 balls come from the 4 blue balls:

$$p(0, 0) = \frac{\binom43}{\binom{10}3} = \frac4{120} = 0.0333$$

For i = 0, j = 1, one white ball is chosen and the remaining two are blue:

$$p(0, 1) = \frac{\binom31\binom42}{\binom{10}3} = \frac{18}{120} = 0.15$$

Similarly,

$$p(0, 2) = \frac{\binom32\binom41}{\binom{10}3} = \frac{12}{120} = 0.1, \qquad p(0, 3) = \frac{\binom33}{\binom{10}3} = \frac1{120} = 0.00833$$

$$p(1, 0) = \frac{\binom31\binom42}{\binom{10}3} = \frac{18}{120} = 0.15, \qquad p(1, 1) = \frac{\binom31\binom31\binom41}{\binom{10}3} = \frac{36}{120} = 0.3, \qquad p(1, 2) = \frac{\binom31\binom32}{\binom{10}3} = \frac9{120} = 0.075$$

$$p(2, 0) = \frac{\binom32\binom41}{\binom{10}3} = \frac{12}{120} = 0.1, \qquad p(2, 1) = \frac{\binom32\binom31}{\binom{10}3} = \frac9{120} = 0.075, \qquad p(3, 0) = \frac{\binom33}{\binom{10}3} = 0.00833$$

          Y = 0     Y = 1    Y = 2    Y = 3      Σ_j p(x_i, y_j)
X = 0     0.0333    0.15     0.1      0.00833    0.29163 = p_X(0)
X = 1     0.15      0.3      0.075    0          0.525   = p_X(1)
X = 2     0.1       0.075    0        0          0.175   = p_X(2)
X = 3     0.00833   0        0        0          0.00833 = p_X(3)
Σ_i       0.29163   0.525    0.175    0.00833    1
          p_Y(0)    p_Y(1)   p_Y(2)   p_Y(3)
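The whole table can be generated programmatically (illustrative, not from the text) using the multivariate hypergeometric form p(i, j) = C(3, i)C(3, j)C(4, 3 − i − j)/C(10, 3):

```python
from math import comb

# Joint pmf of (red, white) counts when 3 balls are drawn from {3 red, 3 white, 4 blue}.
total = comb(10, 3)
for i in range(4):                 # number of red balls chosen
    for j in range(4 - i):         # number of white balls chosen
        blue = 3 - i - j
        if 0 <= blue <= 4:
            p = comb(3, i) * comb(3, j) * comb(4, blue) / total
            print(i, j, round(p, 5))
```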
4.13 In a colony, 10% of the families have no children, 30% have one child, 40% have two children, and 20% have three children. In a family, B represents the number of boys and G the number of girls. If a family is chosen at random from the colony, find the joint pmf of (B, G). Assume that in each family each child is equally likely to be a boy or a girl.

Solution P(B = 0, G = 0) = P(no children) = 0.1. For families with boys only,

$$P(B = 1, G = 0) = 0.3\left(\frac12\right) = 0.15, \qquad P(B = 2, G = 0) = 0.4\left(\frac12\right)^2 = 0.1, \qquad P(B = 3, G = 0) = 0.2\left(\frac12\right)^3 = 0.025$$

and by symmetry the same values hold for families with girls only.

$$P(B = 1, G = 1) = P(2\text{ children})\,P(1\text{ boy and }1\text{ girl}) = (0.4)\left(\frac12\right) = 0.2$$

$$P(B = 1, G = 2) = P(3\text{ children})\,P(1\text{ boy and }2\text{ girls}) = (0.2)\left[\binom31\left(\frac12\right)^1\left(\frac12\right)^2\right] = 0.2\left(\frac38\right) = 0.075$$

Similarly, P(B = 2, G = 1) = 0.075. The joint pmf P(B = i, G = j) is shown in the table.
4.14 The joint distribution function of two discrete random variables X and Y is given by

$$F_{X,Y}(x, y) = \begin{cases}\frac14, & x = 1,\ y = 1\\[2pt] \frac38, & x = 1,\ y = 2\\[2pt] \frac34, & x = 2,\ y = 1\\[2pt] 1, & x = 2,\ y = 2\end{cases}$$

Determine the (a) joint pmf of X and Y; (b) marginal pmf of X; (c) marginal pmf of Y.

Solution
(a) We know $F_{X,Y}(a, b) = \sum_{x \le a}\sum_{y \le b} p_{X,Y}(x, y)$:

$$F_{X,Y}(1, 1) = p_{X,Y}(1, 1) = \frac14$$

$$F_{X,Y}(1, 2) = p_{X,Y}(1, 1) + p_{X,Y}(1, 2) = \frac38\ \Rightarrow\ p_{X,Y}(1, 2) = \frac38 - \frac14 = \frac18$$

$$F_{X,Y}(2, 1) = p_{X,Y}(1, 1) + p_{X,Y}(2, 1) = \frac34\ \Rightarrow\ p_{X,Y}(2, 1) = \frac34 - \frac14 = \frac12$$

$$F_{X,Y}(2, 2) = p_{X,Y}(1, 1) + p_{X,Y}(1, 2) + p_{X,Y}(2, 1) + p_{X,Y}(2, 2) = 1\ \Rightarrow\ p_{X,Y}(2, 2) = 1 - \left(\frac14 + \frac18 + \frac12\right) = \frac18$$

The joint pmf is shown in the table.

          Y = 1   Y = 2
X = 1     1/4     1/8
X = 2     1/2     1/8

(b) p_X(1) = 1/4 + 1/8 = 3/8 and p_X(2) = 1/2 + 1/8 = 5/8.
(c) p_Y(1) = 1/4 + 1/2 = 3/4 and p_Y(2) = 1/8 + 1/8 = 1/4.
Practice Problem
Solved Problems
4.15 The joint sample space for two random variables X and Y and the corresponding probabilities are shown in the table. Find and plot (a) F_{X,Y}(x, y), (b) the marginal distribution functions of X and Y. (c) Find P(0.5 < X < 1.5). (d) Find P(X ≤ 1, Y ≤ 2). (e) Find P(1 < X ≤ 2, Y ≤ 3).

(X, Y)   (1, 1)   (2, 2)   (3, 3)   (4, 4)
P        0.05     0.35     0.45     0.15

Solution
(a) The joint distribution function is given by

$$F_{X,Y}(x, y) = P(X \le x, Y \le y) = \sum_{x_i \le x}\sum_{y_j \le y} P(X = x_i, Y = y_j)$$

F_{X,Y}(1, 1) = P(X = 1, Y = 1) = 0.05
F_{X,Y}(2, 2) = 0.05 + 0.35 = 0.40
F_{X,Y}(3, 3) = 0.05 + 0.35 + 0.45 = 0.85
F_{X,Y}(4, 4) = 0.85 + 0.15 = 1.00

[Fig. 4.7: joint distribution function of (X, Y), a staircase rising from 0 to 1]

The joint distribution function can be constructed from the above values. First, for x < 1 and/or y < 1, F_{X,Y} = 0. At the point (1, 1), a step of amplitude 0.05 exists; this value is maintained for x ≥ 1 and y ≥ 1. A step amplitude of 0.35 is added at the point (2, 2), which results in F_{X,Y}(x, y) = 0.40 for x ≥ 2 and y ≥ 2. This value holds until the point (3, 3), where another step of amplitude 0.45 is added; from this point, F_{X,Y}(x, y) = 0.85. Finally, a fourth stair of amplitude 0.15 is added at (4, 4). The resulting joint distribution function is shown in Fig. 4.7, with the expression

F_{X,Y}(x, y) = 0.05 u(x – 1) u(y – 1) + 0.35 u(x – 2) u(y – 2) + 0.45 u(x – 3) u(y – 3) + 0.15 u(x – 4) u(y – 4)

The joint distribution is shown in the table below.

          Y = 1   Y = 2   Y = 3   Y = 4   p_X(x)
X = 1     0.05    0       0       0       0.05
X = 2     0       0.35    0       0       0.35
X = 3     0       0       0.45    0       0.45
X = 4     0       0       0       0.15    0.15
p_Y(y)    0.05    0.35    0.45    0.15

(b) The marginal distribution functions of X and Y are staircase functions with steps 0.05, 0.35, 0.45 and 0.15 at the points 1, 2, 3 and 4 respectively, reaching the values 0.05, 0.40, 0.85 and 1.00.
[Fig. 4.8: (a) marginal distribution function of X; (b) marginal distribution function of Y]
(e) P(1 < X ≤ 2, Y ≤ 3) = 0 + 0.35 + 0 = 0.35

[Fig. 4.10: (a) marginal distribution function of X, (b) marginal distribution function of Y]
Practice Problems
(a) Find the probability distribution of Y. (b) Are X and Y independent?
(Ans: f_Y(y) = 0.5 u(y – 1) + (1/6) u(y – 2) + (1/3) u(y – 3))
REVIEW QUESTIONS
3. Define two-dimensional discrete random variable.
4. Define joint probability mass function.
5. Define marginal pmf of random variables X and Y.
6. Explain in detail about the properties of joint pmf.
7. Define joint distribution of continuous random variables.
8. Explain the properties of joint distribution.
9. Distinguish between joint distribution and marginal distribution.
Solved Problems
4.17 The joint distribution function for two random variables X and Y is

$$F_{X,Y}(x, y) = u(x)\,u(y)\left(1 - e^{-ax} - e^{-ay} + e^{-a(x + y)}\right)$$

Sketch F_{X,Y}(x, y). Assuming a = 0.6, find P(X ≤ 1, Y ≤ 1), P(1 < X ≤ 2) and P(–1 < X ≤ 2, 1 < Y ≤ 2).

Solution The joint distribution function is shown in Fig. 4.11.
[Fig. 4.11: joint distribution function of (X, Y)]
Given a = 0.6:

$$P(X \le 1, Y \le 1) = F_{X,Y}(1, 1) = 1 - e^{-a} - e^{-a} + e^{-2a} = 1 - e^{-0.6} - e^{-0.6} + e^{-1.2} = 0.2035$$

$$P(1 < X \le 2) = F_X(2) - F_X(1) = F_{X,Y}(2, \infty) - F_{X,Y}(1, \infty) = \left(1 - e^{-2(0.6)}\right) - \left(1 - e^{-0.6}\right) = 0.2476$$

$$P(-1 < X \le 2,\ 1 < Y \le 2) = F_{X,Y}(2, 2) + F_{X,Y}(-1, 1) - F_{X,Y}(2, 1) - F_{X,Y}(-1, 2)$$

Since F_{X,Y}(x, y) = 0 for x < 0 or y < 0, F_{X,Y}(–1, 1) = 0 and F_{X,Y}(–1, 2) = 0.

$$F_{X,Y}(2, 2) = 1 - e^{-1.2} - e^{-1.2} + e^{-2.4} = 0.4883, \qquad F_{X,Y}(2, 1) = 1 - e^{-1.2} - e^{-0.6} + e^{-1.8} = 0.3153$$

$$\Rightarrow\ P(-1 < X \le 2,\ 1 < Y \le 2) = 0.4883 - 0.3153 = 0.173$$
Practice Problems
(Ans: (a) F_X(x) = 1 – 1/x² for x > 1, 0 otherwise; (b) 45/64, 1/36)
Solved Problems
4.18 The joint CDF of a two-dimensional random variable is F_{X,Y}(x, y). Show that P(X > a, Y > c) = 1 – F_X(a) – F_Y(c) + F_{X,Y}(a, c).

Solution

$$P(X > a, Y > c) = P(a < X < \infty,\ c < Y < \infty) = P(a < X \le \infty, Y \le \infty) - P(a < X \le \infty, Y \le c)$$

$$= \left[F_{X,Y}(\infty, \infty) - F_{X,Y}(a, \infty)\right] - \left[F_{X,Y}(\infty, c) - F_{X,Y}(a, c)\right]$$

We have F_{X,Y}(∞, ∞) = 1, F_{X,Y}(a, ∞) = F_X(a) and F_{X,Y}(∞, c) = F_Y(c):

$$\Rightarrow\ P(X > a, Y > c) = 1 - F_X(a) - F_Y(c) + F_{X,Y}(a, c)$$

4.19 Let F_X(x) and F_Y(y) be valid one-dimensional CDFs. Show that F_{X,Y}(x, y) = F_X(x)F_Y(y) satisfies the properties of a two-dimensional CDF.

Solution
(a) F_{X,Y} is a nondecreasing function of both x and y: if x₁ ≤ x₂ and y₁ ≤ y₂, then since F_X(x₁) ≤ F_X(x₂) and F_Y(y₁) ≤ F_Y(y₂),
    F_{X,Y}(x₁, y₁) = F_X(x₁)F_Y(y₁) ≤ F_X(x₂)F_Y(y₂) = F_{X,Y}(x₂, y₂)
(b) F_{X,Y}(x, –∞) = F_X(x)F_Y(–∞) = 0 since F_Y(–∞) = 0; similarly F_{X,Y}(–∞, y) = F_X(–∞)F_Y(y) = 0 since F_X(–∞) = 0; and F_{X,Y}(∞, ∞) = F_X(∞)F_Y(∞) = 1(1) = 1.
(c) The marginal CDFs are recovered correctly: F_X(x) = F_{X,Y}(x, ∞) = F_X(x)F_Y(∞) = F_X(x) and F_Y(y) = F_{X,Y}(∞, y) = F_X(∞)F_Y(y) = F_Y(y).
(d) F_{X,Y} is right continuous:

$$\lim_{x \to a^+} F_{X,Y}(x, y) = \lim_{x \to a^+} F_X(x)F_Y(y) = F_X(a)F_Y(y) = F_{X,Y}(a, y)$$

For G_{X,Y}(x, y) = 1 − e^{−x−y}:

$$G_{X,Y}(x_1, y_1) = 1 - e^{-x_1 - y_1}, \quad G_{X,Y}(x_1, y_2) = 1 - e^{-x_1 - y_2}, \quad G_{X,Y}(x_2, y_1) = 1 - e^{-x_2 - y_1}$$

$$\Rightarrow\ P(x_1 < X \le x_2,\ y_1 < Y \le y_2) = \left(1 - e^{-x_2 - y_2}\right) + \left(1 - e^{-x_1 - y_1}\right) - \left(1 - e^{-x_1 - y_2}\right) - \left(1 - e^{-x_2 - y_1}\right)$$

$$= e^{-x_1 - y_2} + e^{-x_2 - y_1} - e^{-x_2 - y_2} - e^{-x_1 - y_1} = e^{-x_1}\left(e^{-y_2} - e^{-y_1}\right) - e^{-x_2}\left(e^{-y_2} - e^{-y_1}\right) = \left(e^{-x_1} - e^{-x_2}\right)\left(e^{-y_2} - e^{-y_1}\right)$$
Solution Given:

G_{X,Y}(x, y) = 0 for x < y; 1 for x ≥ y

Since the function is zero for x < y, we can write G_{X,Y}(–∞, –∞) = 0 and G_{X,Y}(–∞, y) = 0. But G_{X,Y}(x, –∞) = 1 for every finite x, which violates the requirement that a joint distribution function vanish when either argument tends to –∞. Therefore, G_{X,Y}(x, y) is not a valid distribution function. Also, consider points such that x₁ < y₁ ≤ x₂ < y₂; then

$$P(x_1 < X \le x_2,\ y_1 < Y \le y_2) = G_{X,Y}(x_1, y_1) + G_{X,Y}(x_2, y_2) - G_{X,Y}(x_2, y_1) - G_{X,Y}(x_1, y_2)$$

Now G_{X,Y}(x₁, y₁) = 0 since x₁ < y₁; G_{X,Y}(x₂, y₂) = 0 since x₂ < y₂; G_{X,Y}(x₁, y₂) = 0 since x₁ < y₂; and G_{X,Y}(x₂, y₁) = 1 since x₂ ≥ y₁. Therefore,

$$P(x_1 < X \le x_2,\ y_1 < Y \le y_2) = 0 + 0 - 1 - 0 = -1$$

which is less than zero and violates the properties of a distribution function. Hence, the given G_{X,Y}(x, y) is not a valid joint distribution function.
Practice Problems
∂ ∂
FX ,Y ( x, y + Dy) FX ,Y ( x, y)
= lim ∂ x - lim x ∂
Dy Æ 0 Dy Dy Æ 0 Dy
∂2
= F ( x, y ) (4.25)
∂x ∂y X ,Y
That is, the joint pdf f_{X,Y}(x, y) can be obtained from the joint CDF F_{X,Y}(x, y) by taking a partial derivative with respect to each variable. The joint pdf of two random variables X and Y is defined as the second mixed derivative of the joint distribution function, whenever it exists:

$$f_{X,Y}(x, y) = \frac{\partial^2 F_{X,Y}(x, y)}{\partial x\,\partial y} \tag{4.26}$$

The joint CDF can be obtained in terms of the joint pdf using the equation

$$F_{X,Y}(x, y) = \int_{-\infty}^{y}\int_{-\infty}^{x} f_{X,Y}(u, v)\,du\,dv \tag{4.27}$$

The joint pdf has the following properties:
1. For all x and y, f_{X,Y}(x, y) ≥ 0 (4.28)
2. $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx\,dy = 1$ (4.29)
   Using Eq. (4.27) together with the fact F_{X,Y}(∞, ∞) = 1, we can prove the above equation.
3. $F_{X,Y}(x, y) = \int_{-\infty}^{y}\int_{-\infty}^{x} f_{X,Y}(u, v)\,du\,dv$ (4.30)
4. $F_X(x) = \int_{-\infty}^{x}\int_{-\infty}^{\infty} f_{X,Y}(u, v)\,dv\,du$ (4.31)
5. $F_Y(y) = \int_{-\infty}^{y}\int_{-\infty}^{\infty} f_{X,Y}(u, v)\,du\,dv$ (4.32)
6. $f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy$ (4.33), (4.34)
7. $f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx$ (4.35)
8. f_{X,Y}(x, y) is continuous for all except possibly finitely many values of x and y.
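A small numerical illustration of properties 6 and 7 (not from the text): discretize a joint pdf on a grid and sum out one variable to approximate the marginals. The pdf f_{X,Y}(x, y) = x + y on the unit square is an arbitrary valid example (its marginal is f_X(x) = x + 1/2).

```python
import numpy as np

# Numerical marginalization (properties 6 and 7): f(x, y) = x + y on [0, 1]^2.
n = 1000
x = (np.arange(n) + 0.5) / n            # grid midpoints on [0, 1]
X, Y = np.meshgrid(x, x, indexing="ij")
f = X + Y                               # joint pdf values on the grid

fx = f.sum(axis=1) / n                  # approximates f_X(x) = x + 1/2
print(f.sum() / n**2)                   # total probability mass, ~1
print(fx[0], x[0] + 0.5)                # grid estimate vs exact marginal value
```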
$$\delta(x) = \begin{cases}1, & x = 0\\ 0, & x \ne 0\end{cases}$$

It is also known as the unit impulse sequence. The shifted unit impulse sequence is defined as

$$\delta(x - a) = \begin{cases}1, & x = a\\ 0, & x \ne a\end{cases}$$

Differentiating the unit step function gives an impulse.

REVIEW QUESTIONS
10. Define the joint pdf of two random variables X and Y.
11. Explain the properties of the joint pdf.
12. Define the pmf for N random variables.
Solved Problems
Solution Given:

$$f_{X,Y}(x, y) = \frac65\left(x + y^2\right) \text{ for } 0 \le x \le 1,\ 0 \le y \le 1; \quad 0 \text{ otherwise}$$

The marginal pdf of X is

$$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy = \frac65\int_0^1\left(x + y^2\right)dy = \frac65\left[x + \frac{y^3}{3}\right]_0^1 = \frac65\left(x + \frac13\right), \quad 0 \le x \le 1$$

Similarly, f_Y(y) = (6/5)(1/2 + y²), and

$$P\left(\frac14 < Y < \frac34\right) = \frac65\left[\frac{y^3}{3} + \frac y2\right]_{1/4}^{3/4} = \frac65\left\{\frac13\left(\frac{27}{64} - \frac1{64}\right) + \frac12\left(\frac34 - \frac14\right)\right\} = \frac65\left\{\frac13\left(\frac{26}{64}\right) + \frac12\left(\frac12\right)\right\} = 0.4625$$
4.23 Find K if the joint probability density function of the bivariate random variable (X, Y) is given by

$$f_{X,Y}(x, y) = \begin{cases}K(1 - x)(1 - y), & 0 < x < 1,\ 0 < y < 1\\ 0, & \text{otherwise}\end{cases}$$

Solution Using $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx\,dy = 1$:

$$\int_0^1\int_0^1 K(1 - x)(1 - y)\,dx\,dy = K\int_0^1(1 - x)\,dx\int_0^1(1 - y)\,dy = K\left[x - \frac{x^2}{2}\right]_0^1\left[y - \frac{y^2}{2}\right]_0^1 = K\left(\frac12\right)\left(\frac12\right) = 1$$

$$\Rightarrow\ K = 4$$
4.24 Joint probabilities of two random variables X and Y are given in the table:

          Y = 1   Y = 2
X = 1     0.2     0.15
X = 2     0.1     0.2
X = 3     0.2     0.15

(a) Find the joint and marginal distribution functions.
(b) Plot the joint and marginal density functions.

Solution
F_{X,Y}(1, 1) = P(X ≤ 1, Y ≤ 1) = P(X = 1, Y = 1) = 0.2
F_{X,Y}(2, 1) = 0.2 + 0.1 = 0.3
F_{X,Y}(3, 1) = 0.2 + 0.1 + 0.2 = 0.5
F_{X,Y}(1, 2) = 0.2 + 0.15 = 0.35
F_{X,Y}(2, 2) = 0.2 + 0.15 + 0.1 + 0.2 = 0.65
F_{X,Y}(3, 2) = 1.0

[Fig. 4.12: (a) joint distribution function of (X, Y); (b) marginal distribution function of X; (c) marginal distribution function of Y]

For x < 1, F_X(x) = 0. At the point x = 1, a step amplitude of 0.35 is added; this value is maintained for all x ≥ 1. At x = 2, a step amplitude of 0.3 adds to the function, which results in 0.65, and at x = 3 a further step of 0.35 is added. The resultant F_X(x) is shown in Fig. 4.12(b). F_Y(y) has two step functions of amplitude 0.5 at y = 1 and y = 2, which gives the F_Y(y) shown in Fig. 4.12(c).

The joint pdf of the random variables X and Y can be obtained by using

$$f_{X,Y}(x, y) = \frac{\partial^2 F_{X,Y}(x, y)}{\partial x\,\partial y}$$

Since differentiating a unit step function gives an impulse, f_{X,Y}(x, y) can be written as

f_{X,Y}(x, y) = 0.2 δ(x – 1)δ(y – 1) + 0.1 δ(x – 2)δ(y – 1) + 0.2 δ(x – 3)δ(y – 1) + 0.15 δ(x – 1)δ(y – 2) + 0.2 δ(x – 2)δ(y – 2) + 0.15 δ(x – 3)δ(y – 2)

Similarly, by differentiating F_X(x) and F_Y(y), we get

f_X(x) = 0.35 δ(x – 1) + 0.3 δ(x – 2) + 0.35 δ(x – 3)
f_Y(y) = 0.5 δ(y – 1) + 0.5 δ(y – 2)

[Fig. 4.13: (a) marginal density function of X, (b) marginal density function of Y, (c) joint density function of (X, Y)]
Practice Problems

Solved Problems
4.25 A random vector (X, Y) is uniformly distributed (f_{X,Y}(x, y) = k) in the triangular region with vertices (0, 0), (1, 0) and (0, 1), bounded by the line x + y = 1 (Fig. 4.14), and zero elsewhere. (a) Find the value of k. (b) Find the marginal pdfs of X and Y.

[Fig. 4.14, Fig. 4.15: the triangular region bounded by the axes and the line x + y = 1]

Solution We know that

$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx\,dy = 1$$

$$\Rightarrow\ \int_0^1\int_0^{1 - y} k\,dx\,dy = k\int_0^1(1 - y)\,dy = k\left(y - \frac{y^2}{2}\right)_0^1 = k\left(\frac12\right) = 1\ \Rightarrow\ k = 2$$

$$f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy = \int_0^{1 - x} 2\,dy = 2(1 - x); \quad 0 < x < 1$$

and

$$f_Y(y) = \int_0^{1 - y} 2\,dx = 2(1 - y); \quad 0 < y < 1$$
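An optional simulation (not in the text): sample uniformly from the triangle by rejection and compare a histogram-style estimate of f_X with the closed form 2(1 − x).

```python
import numpy as np

# Rejection sampling of the uniform law on the triangle x, y >= 0, x + y <= 1.
rng = np.random.default_rng(6)
pts = rng.uniform(0, 1, size=(1_000_000, 2))
tri = pts[pts.sum(axis=1) <= 1.0]       # keep points inside the triangle (~half)

edges = np.linspace(0, 1, 11)
hist, _ = np.histogram(tri[:, 0], bins=edges, density=True)
centers = (edges[:-1] + edges[1:]) / 2
for c, h in zip(centers, hist):
    print(round(c, 2), round(h, 3), round(2 * (1 - c), 3))  # estimate vs 2(1 - x)
```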
For the joint pmf p_{X,Y}(x, y) = k(x + 2y), x = 1, 2; y = 1, 2:

$$\sum_{x=1}^{2}\sum_{y=1}^{2} k(x + 2y) = k\sum_{x=1}^{2}\left[(x + 2) + (x + 4)\right] = k\sum_{x=1}^{2}(2x + 6) = k(8 + 10) = 18k = 1\ \Rightarrow\ k = \frac1{18}$$

(b) The marginal pmf of X is given by

$$p_X(x) = \sum_y p_{X,Y}(x, y) = \frac1{18}\left[(x + 2) + (x + 4)\right] = \frac{2x + 6}{18} = \frac{x + 3}{9}; \quad x = 1, 2$$

$$p_Y(y) = \sum_x p_{X,Y}(x, y) = \frac{(1 + 2y) + (2 + 2y)}{18} = \frac{4y + 3}{18}; \quad y = 1, 2$$

Since p_{X,Y}(x, y) ≠ p_X(x)p_Y(y), X and Y are not independent.
          Y
X = 0     0.1     0.05
X = 2     0.25    0.3
X = 4     0.12    0.18

[Fig. 4.18: plots of the joint pmf f_{X,Y}(x, y) and the joint distribution function F_{X,Y}(x, y) for the table above]
Solution
(a) Given: F_{X,Y}(x, y) = (1 – e^{–ax})(1 – e^{–by})
The marginal CDF of X is F_X(x) = F_{X,Y}(x, ∞) = (1 – e^{–ax})(1 – e^{–b(∞)}) = 1 – e^{–ax}. Similarly, F_Y(y) = F_{X,Y}(∞, y) = 1 – e^{–by}. Hence

F_X(x) = 1 – e^{–ax} for x ≥ 0; 0 for x < 0
F_Y(y) = 1 – e^{–by} for y ≥ 0; 0 for y < 0

(b) It can easily be observed that F_{X,Y}(x, y) = F_X(x)F_Y(y). Therefore, X and Y are independent.

P(X ≤ 1, Y ≤ 1) = F_{X,Y}(1, 1) = (1 – e^{–a})(1 – e^{–b})
P(X ≤ 1) = F_X(1) = 1 – e^{–a}
Practice Problems
Find the marginal pdfs of X and Y. (Ans: f_X(x) = e^{–x}, f_Y(y) = 1/(1 + y)²)
Solved Problems
For two independent exponential random variables with joint pdf f_{X,Y}(x, y) = ab e^{–(ax + by)}, x, y ≥ 0:

[Fig. 4.19: the region x > y over which the joint pdf is integrated]

$$P(X > Y) = \int_0^{\infty}\int_0^x ab\, e^{-(ax + by)}\,dy\,dx = \int_0^{\infty} ab\, e^{-ax}\left(\int_0^x e^{-by}\,dy\right)dx = \int_0^{\infty} a\, e^{-ax}\left(1 - e^{-bx}\right)dx$$

$$= a\left[\int_0^{\infty} e^{-ax}\,dx - \int_0^{\infty} e^{-(a + b)x}\,dx\right] = a\left[\frac1a - \frac1{a + b}\right] = \frac{ab}{a(a + b)} = \frac{b}{a + b}$$
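A two-line simulation (illustrative, not from the text) of the result P(X > Y) = b/(a + b) for independent exponentials with rates a and b; a = 2, b = 3 are arbitrary values.

```python
import numpy as np

# P(X > Y) for independent X ~ Exp(a), Y ~ Exp(b) should equal b / (a + b).
rng = np.random.default_rng(7)
a, b = 2.0, 3.0   # arbitrary illustrative rates

x = rng.exponential(1 / a, size=2_000_000)
y = rng.exponential(1 / b, size=2_000_000)
print(np.mean(x > y), b / (a + b))   # both approximately 0.6
```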
For the joint pdf f_{X,Y}(x, y) = k e^{–(x + 2y)}, x, y ≥ 0, we use $\int\int f_{X,Y}(x, y)\,dx\,dy = 1$:

$$\int_0^{\infty}\int_0^{\infty} k\, e^{-x} e^{-2y}\,dx\,dy = k\int_0^{\infty} e^{-2y}\left(-e^{-x}\Big|_0^{\infty}\right)dy = k\int_0^{\infty} e^{-2y}\,dy = \frac k2 = 1\ \Rightarrow\ k = 2$$

$$P(X > 1, Y < 1) = \int_1^{\infty}\int_0^1 2e^{-(x + 2y)}\,dy\,dx = \left(\int_1^{\infty} e^{-x}\,dx\right)\left(\int_0^1 2e^{-2y}\,dy\right) = e^{-1}\left(1 - e^{-2}\right) = 0.8647\, e^{-1} = 0.3181$$

$$P(X < Y) = \int_0^{\infty}\int_x^{\infty} 2e^{-(x + 2y)}\,dy\,dx = \int_0^{\infty} e^{-x}\left(\int_x^{\infty} 2e^{-2y}\,dy\right)dx = \int_0^{\infty} e^{-x}\, e^{-2x}\,dx = \int_0^{\infty} e^{-3x}\,dx = \frac13$$

[Fig. 4.20: the region x < y of integration]

For P(X ≤ 2), the marginal pdf of X is

$$f_X(x) = \int_0^{\infty} 2e^{-(x + 2y)}\,dy = 2e^{-x}\left.\frac{e^{-2y}}{-2}\right|_0^{\infty} = e^{-x}$$

$$P(X \le 2) = \int_0^2 e^{-x}\,dx = -e^{-x}\Big|_0^2 = 1 - e^{-2} = 0.8647$$
4.31 Find the joint density and both marginal density functions for the distribution function of Solved Problem 4.17.

Solution

$$F_{X,Y}(x, y) = u(x)\,u(y)\left(1 - e^{-ax} - e^{-ay} + e^{-a(x + y)}\right)$$

We know

$$f_{X,Y}(x, y) = \frac{\partial^2 F_{X,Y}(x, y)}{\partial x\,\partial y} = \frac{\partial^2}{\partial x\,\partial y}\left\{(1 - e^{-ax})\,u(x)\,(1 - e^{-ay})\,u(y)\right\}$$

$$= \frac{\partial}{\partial x}\left\{(1 - e^{-ax})\,u(x)\left[(1 - e^{-ay})\,\delta(y) + u(y)\,a e^{-ay}\right]\right\}$$

At y = 0 the factor (1 – e^{–ay}) vanishes, so the δ(y) term contributes nothing and

$$f_{X,Y}(x, y) = \frac{\partial}{\partial x}\left\{(1 - e^{-ax})\,u(x)\right\}u(y)\,a e^{-ay} = u(y)\,a e^{-ay}\left\{(1 - e^{-ax})\,\delta(x) + u(x)\,a e^{-ax}\right\}$$

At x = 0, (1 – e^{–ax})δ(x) = 0. Therefore,

$$f_{X,Y}(x, y) = a^2\, e^{-a(x + y)}\,u(x)\,u(y)$$

The marginal density function of X is given by

$$f_X(x) = \int_0^{\infty} a^2 e^{-a(x + y)}\,dy = a^2 e^{-ax}\left.\frac{e^{-ay}}{-a}\right|_0^{\infty} = a e^{-ax}; \quad x > 0$$

$$f_Y(y) = \int_0^{\infty} a^2 e^{-a(x + y)}\,dx = a e^{-ay}; \quad y > 0$$
For the joint pdf f_{X,Y}(x, y) = b e^{–2x} cos(y/2) over 0 ≤ x ≤ 1, 0 ≤ y ≤ π (zero elsewhere), the normalization $\int\int f_{X,Y}(x, y)\,dx\,dy = 1$ gives

$$\int_0^{\pi}\int_0^1 b\, e^{-2x}\cos\left(\frac y2\right)dx\,dy = b\int_0^{\pi}\cos\left(\frac y2\right)\left(\left.\frac{e^{-2x}}{-2}\right|_0^1\right)dy = \frac b2\left(1 - e^{-2}\right)\int_0^{\pi}\cos\left(\frac y2\right)dy$$

$$= \frac b2\left(1 - e^{-2}\right)\left[\frac{\sin(y/2)}{1/2}\right]_0^{\pi} = b\left(1 - e^{-2}\right)(1) = 1$$

$$\Rightarrow\ b = \frac{1}{1 - e^{-2}}$$
Practice Problems
Solved Problems
4.33 Find the value of b for the function given below to be a valid pdf.
f_{X,Y}(x, y) = \begin{cases} b(x^2 + 4y^2) & 0 \le |x| < 1 \text{ and } 0 \le y < 2 \\ 0 & \text{elsewhere} \end{cases}

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx\, dy = 1

\Rightarrow\ \int_0^{2} \int_{-1}^{1} b (x^2 + 4y^2)\, dx\, dy = b \int_0^{2} \left( \frac{x^3}{3} \Big|_{-1}^{1} + 4y^2\, x \Big|_{-1}^{1} \right) dy

= b \int_0^{2} \left( \frac{2}{3} + 8y^2 \right) dy = b \left[ \frac{2}{3} y + \frac{8}{3} y^3 \right]_0^{2} = b \left[ \frac{4}{3} + \frac{64}{3} \right] = \frac{68}{3} b

For a valid pdf, \frac{68}{3} b = 1 \ \Rightarrow\ b = \frac{3}{68}
CONDITIONAL DISTRIBUTION AND DENSITY FUNCTIONS 4.8

The conditional distribution function of a random variable X, given some event A, is defined as

F_X(x \mid A) = P(X \le x \mid A) = \frac{P(X \le x \cap A)}{P(A)} \qquad (4.44)

with P(A) \neq 0.

The conditional density function is given by

f_X(x \mid A) = \frac{d}{dx} F_X(x \mid A) \qquad (4.45)
F_X(x \mid y - \Delta y < Y \le y + \Delta y) = \frac{\displaystyle \int_{y - \Delta y}^{y + \Delta y} \int_{-\infty}^{x} f_{X,Y}(u, v)\, du\, dv}{\displaystyle \int_{y - \Delta y}^{y + \Delta y} f_Y(v)\, dv} \qquad (4.48)

If X and Y are continuous random variables, then for small \Delta y Eq. (4.48) can be written as

F_X(x \mid y - \Delta y < Y \le y + \Delta y) \approx \frac{\displaystyle \left( \int_{-\infty}^{x} f_{X,Y}(u, y)\, du \right) (2 \Delta y)}{f_Y(y)\, (2 \Delta y)} \qquad (4.49)

In the limit \Delta y \to 0,

F_X(x \mid Y = y) = \frac{\displaystyle \int_{-\infty}^{x} f_{X,Y}(u, y)\, du}{f_Y(y)} \qquad (4.50)

Differentiating both sides with respect to x,

f_X(x \mid Y = y) = \frac{f_{X,Y}(x, y)}{f_Y(y)} \qquad (4.51)

for every y such that f_Y(y) \neq 0. Similarly, we can also show that

f_Y(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)} \qquad (4.52)
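Eq. (4.51) is easy to exercise numerically. The sketch below assumes a simple joint density f(x, y) = x + y on the unit square (chosen only for illustration; it is not from the text) and checks that dividing by the marginal yields a conditional density that integrates to one:

# Numerical illustration of Eq. (4.51): f_X(x | Y = y) = f_XY(x, y) / f_Y(y).
import numpy as np

f = lambda x, y: x + y                 # valid pdf on [0,1]^2 (integrates to 1)
x = np.linspace(0, 1, 2001)
y0 = 0.3                               # condition on Y = 0.3
fY_y0 = np.trapz(f(x, y0), x)          # marginal f_Y(y0) = y0 + 1/2 = 0.8
cond = f(x, y0) / fY_y0                # conditional density f_X(x | Y = y0)
print(fY_y0, np.trapz(cond, x))        # 0.8 and 1.0 (conditional integrates to 1)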
f_Y(y) = \sum_{j=1}^{M} P(y_j)\, \delta(y - y_j) \qquad (4.54)
Interval Conditioning  Consider an event A defined in the interval A = \{y_1 < Y \le y_2\}, where y_1 and y_2 are real numbers. We assume that P(A) = P(y_1 < Y \le y_2) \neq 0. Then, using Eq. (4.44),

F_X(x \mid y_1 < Y \le y_2) = P(X \le x \mid y_1 < Y \le y_2) = \frac{P(X \le x \cap (y_1 < Y \le y_2))}{P(y_1 < Y \le y_2)} \qquad (4.61)

= \frac{F_{X,Y}(x, y_1 < Y \le y_2)}{F_Y(y_1 < Y \le y_2)} \qquad (4.62)

= \frac{F_{X,Y}(x, y_2) - F_{X,Y}(x, y_1)}{F_Y(y_2) - F_Y(y_1)} \qquad (4.63)

= \frac{\displaystyle \int_{y_1}^{y_2} \int_{-\infty}^{x} f_{X,Y}(u, y)\, du\, dy}{\displaystyle \int_{y_1}^{y_2} \int_{-\infty}^{\infty} f_{X,Y}(u, y)\, du\, dy} \qquad (4.64)

Differentiating with respect to x gives the conditional density

f_X(x \mid y_1 < Y \le y_2) = \frac{\displaystyle \int_{y_1}^{y_2} f_{X,Y}(x, y)\, dy}{\displaystyle \int_{y_1}^{y_2} \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx\, dy} \qquad (4.65)

3. F_X(x_1 \mid A) = \int_{-\infty}^{x_1} f_X(x \mid A)\, dx \qquad (4.68)

4. P(x_1 < X \le x_2 \mid A) = \int_{x_1}^{x_2} f_X(x \mid A)\, dx \qquad (4.69)
REVIEW QUESTIONS
13. Define conditional distribution and density functions of two random variables X and Y.
14. Define and explain conditional probability density function. Give its properties.
Solved Problems
4.34 A product is classified according to the number of defects it contains (X1) and the factors that
produce it (X2). The joint probability distribution is
X_1 \ X_2     1       2
0            1/8     1/16
1            1/16    1/16
2            3/16    1/8
3            1/8     1/4

Solution
(a) The marginal distribution of X_1 can be obtained by finding the sum of each row:

P(X_1 = 0) = \frac{1}{8} + \frac{1}{16} = \frac{3}{16}; \quad P(X_1 = 1) = \frac{1}{16} + \frac{1}{16} = \frac{1}{8}

P(X_1 = 2) = \frac{3}{16} + \frac{1}{8} = \frac{5}{16}; \quad P(X_1 = 3) = \frac{1}{8} + \frac{1}{4} = \frac{3}{8}

(b) P(X_1 = 0 \mid X_2 = 1) = \frac{P(X_1 = 0, X_2 = 1)}{P(X_2 = 1)}

P(X_2 = 1) = sum of the first column = \frac{1}{8} + \frac{1}{16} + \frac{3}{16} + \frac{1}{8} = \frac{1}{2}

P(X_1 = 0 \mid X_2 = 1) = \frac{1/8}{1/2} = \frac{1}{4}; \quad P(X_1 = 1 \mid X_2 = 1) = \frac{1/16}{1/2} = \frac{1}{8}

P(X_1 = 2 \mid X_2 = 1) = \frac{P(X_1 = 2, X_2 = 1)}{P(X_2 = 1)} = \frac{3/16}{1/2} = \frac{3}{8}

P(X_1 = 3 \mid X_2 = 2) = \frac{P(X_1 = 3, X_2 = 2)}{P(X_2 = 2)} = \frac{1/4}{1/2} = \frac{1}{2}
4.35 The input and output of a communication system are two random variables X and Y respectively.
The joint probability mass function of X and Y is given by a matrix below.
P(X, Y) =
Solution
(a) P(Y = 1 \mid X = 1) = \frac{P(X = 1, Y = 1)}{P(X = 1)}

P(X = 1, Y = 1) = \frac{1}{16}

P(X = 1) = sum of the third row of the matrix = \frac{1}{16} + 0 + \frac{1}{16} = \frac{1}{8}

P(Y = 1 \mid X = 1) = \frac{1/16}{1/8} = \frac{1}{2}

(b) P(X = 1 \mid Y = 1) = \frac{P(X = 1, Y = 1)}{P(Y = 1)}

P(Y = 1) = sum of the third column of the matrix = \frac{1}{4} + \frac{7}{16} + \frac{1}{16} = \frac{3}{4}

P(X = 1 \mid Y = 1) = \frac{1/16}{3/4} = \frac{1}{12}
4.36 The joint pmf of random variables (X, Y) is shown in the table. Find (a) P(X = 1), (b) P(Y = 1), (c) P(X = 1 | Y = 1).

X \ Y     -1      0       1
-1        1/4     0       0
0         1/8     1/4     1/8
1         0       0       1/4

Solution  The pmf of X can be obtained by computing the row sums, whereas the pmf of Y can be obtained by computing the column sums.

P(X = 1) = 0 + 0 + \frac{1}{4} = \frac{1}{4} \quad (sum of the third row)

P(Y = 1) = 0 + \frac{1}{8} + \frac{1}{4} = \frac{3}{8} \quad (sum of the third column)

P(X = 1 \mid Y = 1) = \frac{P(X = 1, Y = 1)}{P(Y = 1)} = \frac{1/4}{3/8} = \frac{2}{3}
4.37 Find the marginal pmfs for the pair of random variables with the joint pmf shown below. Also find the probability of the events A = \{X \le 0\}, B = \{X = -Y\}, and find F_X(x), F_Y(y).

X \ Y     -1      0       1
-1        1/6     0       1/6
0         0       1/3     0
1         1/6     0       1/6

Solution  The pmf of X can be obtained by computing the row sums, and the pmf of Y by computing the column sums.

P(X = -1) = \frac{1}{6} + 0 + \frac{1}{6} = \frac{1}{3}; \quad P(X = 0) = 0 + \frac{1}{3} + 0 = \frac{1}{3}; \quad P(X = 1) = \frac{1}{6} + 0 + \frac{1}{6} = \frac{1}{3}

Similarly, P(Y = -1) = \frac{1}{3}; \quad P(Y = 0) = \frac{1}{3}; \quad P(Y = 1) = \frac{1}{3}

P(X \le 0) = sum of the first two rows = \frac{1}{3} + \frac{1}{3} = \frac{2}{3}

P(X = -Y) = P(X = 0, Y = 0) + P(X = -1, Y = 1) + P(X = 1, Y = -1) = \frac{1}{3} + \frac{1}{6} + \frac{1}{6} = \frac{2}{3}

The joint distribution function can be written as

F_{X,Y}(x, y) = \frac{1}{6} u(x + 1) u(y + 1) + \frac{1}{6} u(x + 1) u(y - 1) + \frac{1}{3} u(x) u(y) + \frac{1}{6} u(x - 1) u(y + 1) + \frac{1}{6} u(x - 1) u(y - 1)

F_X(x) = F_{X,Y}(x, \infty) = \frac{1}{6} u(x + 1) + \frac{1}{6} u(x + 1) + \frac{1}{3} u(x) + \frac{1}{6} u(x - 1) + \frac{1}{6} u(x - 1)

= \frac{1}{3} u(x + 1) + \frac{1}{3} u(x) + \frac{1}{3} u(x - 1)

F_Y(y) = F_{X,Y}(\infty, y) = \frac{1}{3} u(y + 1) + \frac{1}{3} u(y) + \frac{1}{3} u(y - 1)
4.38 Two discrete random variables X and Y have the joint probability mass function

P(X = x, Y = y) = \frac{\lambda^x e^{-\lambda}\, p^y (1 - p)^{x - y}}{y!\, (x - y)!}, \quad y = 0, 1, \ldots, x; \quad x = 0, 1, 2, \ldots

where \lambda and p are constants with \lambda > 0 and 0 < p < 1. Find (a) the marginal probability mass functions of X and Y, (b) the conditional distribution of Y for a given X and of X for a given Y.

Solution
(a) The marginal pmf of X is

p_X(x) = \sum_{y=0}^{x} \frac{\lambda^x e^{-\lambda}\, p^y (1 - p)^{x - y}}{y!\, (x - y)!} = \frac{\lambda^x e^{-\lambda}}{x!} \sum_{y=0}^{x} \frac{x!\, p^y (1 - p)^{x - y}}{y!\, (x - y)!} = \frac{\lambda^x e^{-\lambda}}{x!} \sum_{y=0}^{x} \binom{x}{y} p^y (1 - p)^{x - y}

= \frac{\lambda^x e^{-\lambda}}{x!} (1 - p + p)^x = \frac{\lambda^x e^{-\lambda}}{x!}, \quad x = 0, 1, 2, \ldots

Note that X is a random variable with Poisson distribution.

The marginal pmf of Y is

p_Y(y) = \sum_{x=y}^{\infty} \frac{\lambda^x e^{-\lambda}\, p^y (1 - p)^{x - y}}{y!\, (x - y)!} = \frac{e^{-\lambda} (\lambda p)^y}{y!} \sum_{x=y}^{\infty} \frac{[\lambda (1 - p)]^{x - y}}{(x - y)!}

= \frac{(\lambda p)^y e^{-\lambda}}{y!}\, e^{\lambda (1 - p)} = \frac{e^{-\lambda p} (\lambda p)^y}{y!}, \quad y = 0, 1, 2, \ldots

Note that Y is a random variable with Poisson distribution with parameter \lambda p.

(b) The conditional pmf of Y given X is

P(Y = y \mid X = x) = \frac{p_{X,Y}(x, y)}{p_X(x)} = \frac{\lambda^x e^{-\lambda}\, p^y (1 - p)^{x - y}}{y!\, (x - y)!} \cdot \frac{x!}{\lambda^x e^{-\lambda}} = \binom{x}{y} p^y (1 - p)^{x - y}, \quad y = 0, 1, \ldots, x

which is binomial. Similarly, the conditional pmf of X given Y is

P(X = x \mid Y = y) = \frac{\lambda^x e^{-\lambda}\, p^y (1 - p)^{x - y}}{y!\, (x - y)!} \cdot \frac{y!}{e^{-\lambda p} (\lambda p)^y} = \frac{e^{-(1 - p)\lambda} [\lambda (1 - p)]^{x - y}}{(x - y)!}, \quad x = y, y + 1, \ldots
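A small simulation sketch of this result (the parameter values lam = 5 and p = 0.3 are arbitrary choices): drawing X from a Poisson(lam) and then Y binomially from X should leave Y with a Poisson(lam*p) marginal, so its mean and variance coincide.

# Simulation sketch of Solved Problem 4.38: X ~ Poisson(lam); given X = x,
# Y ~ Binomial(x, p). The marginal of Y should be Poisson(lam * p).
import numpy as np

rng = np.random.default_rng(0)
lam, p, n = 5.0, 0.3, 200_000
x = rng.poisson(lam, n)
y = rng.binomial(x, p)           # numpy accepts a vector of trial counts
print(y.mean(), lam * p)         # both ~ 1.5
print(y.var(), lam * p)          # Poisson: variance equals the mean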
4.39 A binary communication channel carries data as one of the two types of signals denoted by 0 and
1. Due to noise, a transmitted ‘0’ is sometimes received as 1 and a transmitted 1 is sometimes received
as ‘0’. For a given channel, assume a probability of 0.94 that a transmitted ‘0’ is correctly received as a
‘0’ and a probability of 0.91 that a transmitted 1 is received as 1. Further, assume a probability of 0.45 of
transmitting a ‘0’. If a signal is sent, determine (a) probability that a 1 is received, (b) probability that a ‘0’
is received, (c) probability that a 1 was transmitted, given that a 1 was received, (d) probability that a ‘0’
was transmitted, given that a ‘0’ was received, and (e) probability of error.
Solution  Let us denote the transmitted message by X and the received message by Y.

The probability that a transmitted '0' is correctly received as '0' is P(Y = 0 | X = 0) = 0.94. Similarly,

P(Y = 1 | X = 0) = 1 - 0.94 = 0.06
P(Y = 1 | X = 1) = 0.91
P(Y = 0 | X = 1) = 1 - 0.91 = 0.09
P(X = 0) = 0.45;  P(X = 1) = 1 - 0.45 = 0.55

P(X = 0, Y = 0) = P(Y = 0 | X = 0) P(X = 0) = (0.94)(0.45) = 0.423
P(X = 0, Y = 1) = P(Y = 1 | X = 0) P(X = 0) = (0.06)(0.45) = 0.027
P(X = 1, Y = 0) = P(Y = 0 | X = 1) P(X = 1) = (0.09)(0.55) = 0.0495
P(X = 1, Y = 1) = P(Y = 1 | X = 1) P(X = 1) = (0.91)(0.55) = 0.5005

The joint probability matrix is

P(X, Y)    Y = 0      Y = 1
X = 0      0.423      0.027
X = 1      0.0495     0.5005
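The remaining parts of the problem follow from this matrix by summing its columns and applying Bayes' rule; a sketch of that arithmetic:

# Sketch: recompute the channel quantities of Solved Problem 4.39.
p0 = 0.45                                      # P(X = 0)
joint = {(0, 0): 0.94 * p0,                    # P(X = 0, Y = 0) = 0.423
         (0, 1): 0.06 * p0,                    # 0.027
         (1, 0): 0.09 * (1 - p0),              # 0.0495
         (1, 1): 0.91 * (1 - p0)}              # 0.5005
pY1 = joint[(0, 1)] + joint[(1, 1)]            # (a) P(Y = 1) = 0.5275
pY0 = 1 - pY1                                  # (b) P(Y = 0) = 0.4725
post1 = joint[(1, 1)] / pY1                    # (c) P(X = 1 | Y = 1)
post0 = joint[(0, 0)] / pY0                    # (d) P(X = 0 | Y = 0)
p_err = joint[(0, 1)] + joint[(1, 0)]          # (e) error probability = 0.0765
print(pY1, pY0, post1, post0, p_err)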
\int_0^{1} \int_0^{1} \int_0^{1} k (x + y + z)\, dx\, dy\, dz = 1

\Rightarrow\ k \int_0^{1} \int_0^{1} \left( \frac{1}{2} + y + z \right) dy\, dz = k \int_0^{1} \left( \frac{1}{2} + \frac{1}{2} + z \right) dz = k \left( 1 + \frac{z^2}{2} \Big|_0^{1} \right) = \frac{3k}{2} = 1 \ \Rightarrow\ k = \frac{2}{3}

\Rightarrow\ f_{X,Y,Z}(x, y, z) = \frac{2}{3} (x + y + z)

(b) f_X(x \mid y, z) = \frac{f_{X,Y,Z}(x, y, z)}{f_{Y,Z}(y, z)}

f_{Y,Z}(y, z) = \frac{2}{3} \int_0^{1} (x + y + z)\, dx = \frac{2}{3} \left( y + z + \frac{1}{2} \right)

f_X(x \mid y, z) = \frac{\frac{2}{3} (x + y + z)}{\frac{2}{3} \left( y + z + \frac{1}{2} \right)} = \frac{x + y + z}{y + z + \frac{1}{2}}
f_{X,Y}(x, y) = \frac{2}{3} \int_0^{1} (x + y + z)\, dz = \frac{2}{3} \left( x + y + \frac{1}{2} \right)

f_Z(z \mid x, y) = \frac{f_{X,Y,Z}(x, y, z)}{f_{X,Y}(x, y)} = \frac{x + y + z}{x + y + \frac{1}{2}}

(c) f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dy = \frac{2}{3} \int_0^{1} \left( x + y + \frac{1}{2} \right) dy = \frac{2}{3} \left( x + \frac{1}{2} + \frac{1}{2} \right) = \frac{2}{3} (x + 1)

f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx = \frac{2}{3} \int_0^{1} \left( x + y + \frac{1}{2} \right) dx = \frac{2}{3} (y + 1)

f_Z(z) = \int_{-\infty}^{\infty} f_{Y,Z}(y, z)\, dy = \frac{2}{3} \int_0^{1} \left( y + z + \frac{1}{2} \right) dy = \frac{2}{3} (z + 1)
Solution  We know

f_Z(z \mid x, y) = \frac{f_{X,Y,Z}(x, y, z)}{f_{X,Y}(x, y)} \ \Rightarrow\ f_{X,Y,Z}(x, y, z) = f_Z(z \mid x, y)\, f_{X,Y}(x, y)

Also, f_Y(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)} \ \Rightarrow\ f_{X,Y}(x, y) = f_Y(y \mid x)\, f_X(x)

Combining the two results, f_{X,Y,Z}(x, y, z) = f_Z(z \mid x, y)\, f_Y(y \mid x)\, f_X(x).
4.42 The joint density function of the random variables X and Y is given by
fXY(x, y) = 8xy 0 < x < 1; 0 < y < x
Find P\left( Y < \frac{1}{8} \,\Big|\, X < \frac{1}{2} \right). Also find the conditional density function f_Y(y \mid x).
Solution

Given: f_{X,Y}(x, y) = 8xy, \ 0 < x < 1, \ 0 < y < x; and 0 elsewhere.

P\left( Y < \frac{1}{8} \,\Big|\, X < \frac{1}{2} \right) = \frac{P\left( X < \frac{1}{2}, Y < \frac{1}{8} \right)}{P\left( X < \frac{1}{2} \right)}

Fig. 4.21  (region of integration: 0 < y < x, with the lines x = 1/2 and y = 1/8 marked)
From the figure, the intervals of integration for x and y are \left( y, \frac{1}{2} \right) and \left( 0, \frac{1}{8} \right) respectively.

P\left( X < \frac{1}{2}, Y < \frac{1}{8} \right) = \int_0^{1/8} \int_y^{1/2} 8xy\, dx\, dy = 8 \int_0^{1/8} \left( \frac{x^2}{2} \Big|_y^{1/2} \right) y\, dy

= 4 \int_0^{1/8} \left( \frac{1}{4} - y^2 \right) y\, dy = 4 \int_0^{1/8} \left( \frac{y}{4} - y^3 \right) dy = 4 \left[ \frac{y^2}{8} - \frac{y^4}{4} \right]_0^{1/8}

= \frac{1}{128} - \frac{1}{4096} = \frac{31}{4096}
The marginal density function of X is given by

f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dy = \int_0^{x} 8xy\, dy = 8x\, \frac{y^2}{2} \Big|_0^{x} = 4x^3, \quad 0 < x < 1

P\left( X < \frac{1}{2} \right) = \int_0^{1/2} 4x^3\, dx = 4\, \frac{x^4}{4} \Big|_0^{1/2} = \frac{1}{16}

P\left( Y < \frac{1}{8} \,\Big|\, X < \frac{1}{2} \right) = \frac{P\left( X < \frac{1}{2}, Y < \frac{1}{8} \right)}{P\left( X < \frac{1}{2} \right)} = \frac{31/4096}{1/16} = \frac{31}{256}

The conditional density function is

f_Y(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)} = \frac{8xy}{4x^3} = \frac{2y}{x^2}, \quad 0 < y < x
Practice Problems

\left( \text{Ans: (a) } 1 - \frac{1}{1 + x} + \frac{1}{1 + x + y} - \frac{1}{1 + y}, \ \text{(b) } \frac{1}{(1 + x)^2}, \ \text{(c) } \frac{2(1 + x)^2}{(1 + x + y)^3} \right)
Solved Problem
Solution
(a) f_{X,Y}(x, y) = x^2 + \frac{xy}{3}, \quad 0 < x < 1, \ 0 < y < 2

f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dy = \int_0^{2} \left( x^2 + \frac{xy}{3} \right) dy = x^2\, y \Big|_0^{2} + \frac{x}{3}\, \frac{y^2}{2} \Big|_0^{2} = 2x^2 + \frac{2x}{3}, \quad 0 < x < 1

The marginal density function of Y is

f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx = \int_0^{1} \left( x^2 + \frac{xy}{3} \right) dx = \frac{1}{3} + \frac{y}{3} \left( \frac{1}{2} \right) = \frac{1}{3} + \frac{y}{6}, \quad 0 < y < 2

P\left( X > \frac{1}{2} \right) = \int_{1/2}^{1} f_X(x)\, dx = \int_{1/2}^{1} \left( 2x^2 + \frac{2x}{3} \right) dx = \frac{2}{3} x^3 \Big|_{1/2}^{1} + \frac{1}{3} x^2 \Big|_{1/2}^{1}

= \frac{2}{3} \left( \frac{7}{8} \right) + \frac{1}{3} \left( \frac{3}{4} \right) = \frac{14 + 6}{24} = \frac{5}{6}
Fig. 4.22  (the region 0 < y < x within the unit strip)

From the figure, the interval of integration for y is from 0 to x, whereas for x it is from 0 to 1.

P(Y < X) = \int_0^{1} \int_0^{x} \left( x^2 + \frac{xy}{3} \right) dy\, dx = \int_0^{1} \left( x^3 + \frac{x^3}{6} \right) dx = \frac{7}{6}\, \frac{x^4}{4} \Big|_0^{1} = \frac{7}{24}
(c) P\left( Y < \frac{1}{2} \,\Big|\, X < \frac{1}{2} \right) = \frac{P\left( X < \frac{1}{2}, Y < \frac{1}{2} \right)}{P\left( X < \frac{1}{2} \right)}

P\left( X < \frac{1}{2}, Y < \frac{1}{2} \right) = \int_0^{1/2} \int_0^{1/2} \left( x^2 + \frac{xy}{3} \right) dx\, dy = \int_0^{1/2} \left( \frac{1}{24} + \frac{y}{24} \right) dy = \frac{1}{48} + \frac{1}{24}\, \frac{y^2}{2} \Big|_0^{1/2} = \frac{4 + 1}{192} = \frac{5}{192}

P\left( X < \frac{1}{2} \right) = \int_0^{1/2} \left( 2x^2 + \frac{2x}{3} \right) dx = \frac{2}{3} \left( \frac{1}{8} \right) + \frac{1}{3} \left( \frac{1}{4} \right) = \frac{1}{6}

P\left( Y < \frac{1}{2} \,\Big|\, X < \frac{1}{2} \right) = \frac{5/192}{1/6} = \frac{5}{32}
The conditional density function is

f_Y(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)} = \frac{x^2 + \frac{xy}{3}}{2x^2 + \frac{2x}{3}} = \frac{3x + y}{6x + 2}

The conditional density function must satisfy \int_{-\infty}^{\infty} f_Y(y \mid x)\, dy = 1:

\int_0^{2} \frac{3x + y}{6x + 2}\, dy = \frac{1}{6x + 2} \int_0^{2} (3x + y)\, dy = \frac{1}{6x + 2} \left[ 3xy \Big|_0^{2} + \frac{y^2}{2} \Big|_0^{2} \right] = \frac{6x + 2}{6x + 2} = 1

f_X(x \mid y) = \frac{f_{X,Y}(x, y)}{f_Y(y)} = \frac{x^2 + \frac{xy}{3}}{\frac{1}{3} + \frac{y}{6}} = \frac{3x^2 + xy}{3} \left( \frac{6}{6 + 3y} \right)(3) \Big/ 3 = \frac{6x^2 + 2xy}{y + 2}

\int_{-\infty}^{\infty} f_X(x \mid y)\, dx = \int_0^{1} \frac{6x^2 + 2xy}{y + 2}\, dx = \frac{1}{y + 2} \left[ 6\, \frac{x^3}{3} \Big|_0^{1} + 2y\, \frac{x^2}{2} \Big|_0^{1} \right] = \frac{2 + y}{y + 2} = 1
Practice Problems

Find f_Y(y \mid X = x). \quad \left( \text{Ans: } 0.5 + 0.75\, \frac{y}{x} - 0.25\, \frac{y^3}{x^3} \right)
Solved Problems
4.44 The joint density function of two random variables X and Y is given by

f_{X,Y}(x, y) = a^2 e^{-a(x + y)}\, u(x)\, u(y)

(a) Find the conditional density functions f_X(x | Y = y) and f_Y(y | X = x). (b) Are the random variables X and Y statistically independent?

Solution
(a) From Solved Problem (4.31), we have f_X(x) = a e^{-ax}\, u(x) and f_Y(y) = a e^{-ay}\, u(y).

f_X(x \mid Y = y) = \frac{f_{X,Y}(x, y)}{f_Y(y)} = \frac{a^2 e^{-a(x + y)}\, u(x)\, u(y)}{a e^{-ay}\, u(y)} = a e^{-ax}\, u(x)

f_Y(y \mid X = x) = \frac{f_{X,Y}(x, y)}{f_X(x)} = \frac{a^2 e^{-a(x + y)}\, u(x)\, u(y)}{a e^{-ax}\, u(x)} = a e^{-ay}\, u(y)

(b) Since f_X(x \mid Y = y) = f_X(x) and f_Y(y \mid X = x) = f_Y(y), X and Y are statistically independent.
4.45 The joint pdf of the random variable (X, Y) is given by f_{X,Y}(x, y) = K xy\, e^{-(x^2 + y^2)}, x > 0, y > 0. Find the value of K and also prove that X and Y are independent.

Solution  We know for a valid pdf

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx\, dy = 1

Since x > 0 and y > 0, we can write

\int_0^{\infty} \int_0^{\infty} K xy\, e^{-(x^2 + y^2)}\, dx\, dy = K \left[ \int_0^{\infty} x e^{-x^2}\, dx \right] \left[ \int_0^{\infty} y e^{-y^2}\, dy \right] = 1

Consider \int_0^{\infty} x e^{-x^2}\, dx. Let x^2 = t \Rightarrow 2x\, dx = dt:

\int_0^{\infty} x e^{-x^2}\, dx = \frac{1}{2} \int_0^{\infty} e^{-t}\, dt = \frac{1}{2} (-e^{-t}) \Big|_0^{\infty} = \frac{1}{2}

Similarly, \int_0^{\infty} y e^{-y^2}\, dy = \frac{1}{2}

\Rightarrow\ K \left( \frac{1}{2} \right) \left( \frac{1}{2} \right) = 1 \ \Rightarrow\ K = 4
The marginal pdf of X is given by

f_X(x) = \int_0^{\infty} f_{X,Y}(x, y)\, dy = \int_0^{\infty} 4xy\, e^{-(x^2 + y^2)}\, dy = 4x e^{-x^2} \int_0^{\infty} y e^{-y^2}\, dy = 4x e^{-x^2} \left( \frac{1}{2} \right) = 2x e^{-x^2}, \quad x > 0

The marginal density function of Y is given by

f_Y(y) = \int_0^{\infty} f_{X,Y}(x, y)\, dx = 4y e^{-y^2} \int_0^{\infty} x e^{-x^2}\, dx = 4y e^{-y^2} \left( \frac{1}{2} \right) = 2y e^{-y^2}, \quad y > 0

The joint pdf satisfies f_{X,Y}(x, y) = f_X(x)\, f_Y(y). Therefore, X and Y are independent.
Solution
(a) For a valid joint pdf, we know

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx\, dy = 1

\Rightarrow\ \int_0^{\infty} \int_0^{\infty} K e^{-(x + y)}\, dx\, dy = K \int_0^{\infty} e^{-y} \left( \frac{e^{-x}}{-1} \Big|_0^{\infty} \right) dy = K \int_0^{\infty} e^{-y}\, dy = K

\Rightarrow\ K = 1

(b) The marginal density function f_X(x) is given by

f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dy = \int_0^{\infty} e^{-(x + y)}\, dy = e^{-x}\, \frac{e^{-y}}{-1} \Big|_0^{\infty} = e^{-x}

f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx = \int_0^{\infty} e^{-(x + y)}\, dx = e^{-y}\, \frac{e^{-x}}{-1} \Big|_0^{\infty} = e^{-y}

\Rightarrow\ f_X(x) = e^{-x} for x \ge 0, and 0 elsewhere; \quad f_Y(y) = e^{-y} for y \ge 0, and 0 elsewhere

(c) P(0 \le X \le 2, 2 \le Y \le 3) = \int_0^{2} e^{-x}\, dx \int_2^{3} e^{-y}\, dy = (1 - e^{-2})(e^{-2} - e^{-3})

(d) From the given f_{X,Y}(x, y) and the marginal pdfs, we can easily observe that f_{X,Y}(x, y) = f_X(x)\, f_Y(y). Hence, X and Y are independent.
Practice Problem
Solved Problems
4.47 Radha and Mohan decide to meet at a park between 5.00 p.m. and 6 p.m. They arrive independently
and their arrival time is uniformly distributed. Find the probability that the first to arrive has to wait longer
than 15 minutes.
Solution Let R and M denote the time past 5 that Radha and Mohan arrive at the park. Since Radha and
Mohan meet at a park between 5.00 p.m. and 6.00 p.m. and arrival time is uniformly distributed, we can write
the pdf of arrival times of both Radha and Mohan as
1
fR(r) = for 0 £ r £ 60
60
= 0 otherwise
1
fM(m) = 0 £ m £ 60
60
= 0 otherwise
Here, we consider two cases (a) Mohan arrives at the park before Radha (b) Radha arrives at the park
before Mohan.
In the first case, the probability can be expressed as P(M + 15 < R), and for second case, the probability
can be expressed as P(R + 15 < M).
Therefore, the desired probability is P(M + 15 < R) + P(R + 15 < M).
P(M + 15 < R) = \int_{15}^{60} \int_0^{r - 15} f_M(m)\, f_R(r)\, dm\, dr = \left( \frac{1}{60} \right)^2 \int_{15}^{60} (r - 15)\, dr

= \left( \frac{1}{60} \right)^2 \left[ \frac{r^2}{2} - 15r \right]_{15}^{60} = \left( \frac{1}{60} \right)^2 \left[ \frac{(60)^2 - (15)^2}{2} - 15(60 - 15) \right] = 0.28125

Similarly,

P(R + 15 < M) = \int_{15}^{60} \int_0^{m - 15} f_M(m)\, f_R(r)\, dr\, dm = \left( \frac{1}{60} \right)^2 \int_{15}^{60} (m - 15)\, dm = 0.28125

Therefore, the desired probability is 0.28125 + 0.28125 = 0.5625.
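A Monte Carlo sketch of the same answer (one million uniform arrival-time pairs; the event is that the two arrival times differ by more than 15 minutes):

# Monte Carlo sketch of Solved Problem 4.47: arrival times uniform on (0, 60).
import numpy as np

rng = np.random.default_rng(1)
r, m = rng.uniform(0, 60, (2, 1_000_000))
print(np.mean(np.abs(r - m) > 15))   # ~ 0.5625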
The marginal pdf of X is

f_X(x) = \int_{-\infty}^{\infty} \frac{1}{4} e^{-|x| - |y|}\, dy = \frac{1}{4} e^{-|x|} \left[ \int_{-\infty}^{0} e^{y}\, dy + \int_0^{\infty} e^{-y}\, dy \right] = \frac{1}{4} e^{-|x|} (1 + 1) = \frac{1}{2} e^{-|x|}

Similarly,

f_Y(y) = \int_{-\infty}^{\infty} \frac{1}{4} e^{-|x| - |y|}\, dx = \frac{1}{4} e^{-|y|} \left[ \int_{-\infty}^{0} e^{x}\, dx + \int_0^{\infty} e^{-x}\, dx \right] = \frac{1}{2} e^{-|y|}

From the marginal and joint density functions, we can observe that f_{X,Y}(x, y) = f_X(x)\, f_Y(y). Hence, X and Y are independent.

(b) P(X \le 1, Y \le 0) = \int_{-\infty}^{0} \int_{-\infty}^{1} \frac{1}{4} e^{-|x| - |y|}\, dx\, dy

= \frac{1}{4} \int_{-\infty}^{0} e^{-|y|} \left[ \int_{-\infty}^{0} e^{x}\, dx + \int_0^{1} e^{-x}\, dx \right] dy = \frac{1}{4} \int_{-\infty}^{0} e^{-|y|} \left[ 2 - e^{-1} \right] dy

= \frac{1}{4} (2 - e^{-1}) \int_{-\infty}^{0} e^{y}\, dy = \frac{1}{4} (2 - e^{-1})

P(X \le 1, Y \le 0) = \frac{1}{4} (2 - e^{-1})
Solution

Given: f_{X,Y}(x, y) = \begin{cases} k e^{-(ax + by)} & x > 0,\ y > 0 \\ 0 & \text{otherwise} \end{cases}

We know \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx\, dy = 1:

\int_0^{\infty} \int_0^{\infty} k e^{-(ax + by)}\, dx\, dy = k \int_0^{\infty} e^{-ax} \left( \frac{e^{-by}}{-b} \Big|_0^{\infty} \right) dx = \frac{k}{b} \int_0^{\infty} e^{-ax}\, dx = \frac{k}{ab}

\frac{k}{ab} = 1 \ \Rightarrow\ k = ab

The marginal density function of X is given by

f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dy = \int_0^{\infty} ab\, e^{-(ax + by)}\, dy = ab\, e^{-ax}\, \frac{e^{-by}}{-b} \Big|_0^{\infty} = ab\, e^{-ax} \left( \frac{1}{b} \right) = a e^{-ax}

\Rightarrow\ f_X(x) = a e^{-ax} for x > 0, and 0 otherwise

f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx = \int_0^{\infty} ab\, e^{-ax} e^{-by}\, dx = ab\, e^{-by}\, \frac{e^{-ax}}{-a} \Big|_0^{\infty} = b e^{-by}

\Rightarrow\ f_Y(y) = b e^{-by} for y > 0, and 0 otherwise

Since f_{X,Y}(x, y) = ab\, e^{-ax} e^{-by} = f_X(x)\, f_Y(y), X and Y are independent.
(c) P(X > Y) = \int_0^{\infty} \int_0^{x} ab\, e^{-(ax + by)}\, dy\, dx = ab \int_0^{\infty} e^{-ax} \left( \frac{e^{-by}}{-b} \Big|_0^{x} \right) dx

= a \int_0^{\infty} e^{-ax} (1 - e^{-bx})\, dx = a \left[ \frac{e^{-ax}}{-a} \Big|_0^{\infty} - \frac{e^{-(a + b)x}}{-(a + b)} \Big|_0^{\infty} \right] = 1 - \frac{a}{a + b} = \frac{b}{a + b}
4.50 (X, Y) is a bivariate random variable in which X and Y are independent. If X is a uniform random
variable over (0, 0.5) and Y is an exponential random variable with parameter l = 4, find the joint pdf of
(X, Y).
Solution  X is a uniform random variable with a = 0 and b = 0.5. Hence,

f_X(x) = \frac{1}{0.5 - 0} = 2 \quad \text{for } 0 \le x \le 0.5

Y is an exponential random variable with \lambda = 4:

f_Y(y) = 4 e^{-4y}, \quad y > 0

Since X and Y are independent,

f_{X,Y}(x, y) = f_X(x)\, f_Y(y) = 8 e^{-4y}, \quad 0 \le x \le 0.5,\ y > 0
F_Z(z) = P(X + Y \le z) = \iint_S f_{X,Y}(x, y)\, dx\, dy \qquad (4.78)

Since X and Y are independent,

F_Z(z) = \iint_S f_X(x)\, f_Y(y)\, dx\, dy \qquad (4.79)

where S is the region to the left of the line x + y = z (Fig. 4.23). Hence

F_Z(z) = \iint_{x + y \le z} f_X(x)\, f_Y(y)\, dx\, dy = \int_{-\infty}^{\infty} \int_{-\infty}^{z - y} f_X(x)\, f_Y(y)\, dx\, dy = \int_{-\infty}^{\infty} F_X(z - y)\, f_Y(y)\, dy \qquad (4.80)

The pdf of Z is obtained by differentiating the CDF. The distribution F_Z is called the convolution of the distributions F_X(x) and F_Y(y). Differentiating Eq. (4.80), we get

f_Z(z) = f_{X + Y}(z) = \frac{d}{dz} \left[ \int_{-\infty}^{\infty} F_X(z - y)\, f_Y(y)\, dy \right] = \int_{-\infty}^{\infty} \frac{d}{dz} F_X(z - y)\, f_Y(y)\, dy \qquad (4.81)

= \int_{-\infty}^{\infty} f_X(z - y)\, f_Y(y)\, dy \qquad (4.82)

The above expression is known as the convolution integral. That is, the density function of the sum of two statistically independent random variables is the convolution of their individual density functions.
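The convolution integral can be checked numerically. The sketch below discretizes two unit-rate exponential densities (a case solved exactly later, where the sum has density z e^{-z}; see Solved Problem 4.56) and convolves them on a grid:

# Sketch: numerical check of Eq. (4.82) for two exponential(1) densities.
import numpy as np

dz = 0.001
z = np.arange(0, 10, dz)
fX = np.exp(-z)                               # e^{-x} u(x), sampled on the grid
fZ = np.convolve(fX, fX)[:z.size] * dz        # discrete convolution integral
print(np.max(np.abs(fZ - z * np.exp(-z))))    # small discretization error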
4.10.1 Sum of Several Random Variables

Consider the sum of independent random variables X_1, X_2, X_3, \ldots, X_N, expressed as Y = X_1 + X_2 + \cdots + X_N. Let Y_1 = X_1 + X_2. Then the pdf of Y_1 is the convolution of the pdfs of X_1 and X_2. That is,

f_{Y_1}(y_1) = f_{X_1}(x_1) * f_{X_2}(x_2)
REVIEW QUESTIONS
15. What is the probability density function of the sum of two random variables?
16. Explain the method of finding the distribution and density functions for a sum of statistically
independent random variables.
Solved Problems
4.51 A continuous random signal X is transmitted over a channel. At the receiver, the received signal Y
consists of additive noise N. That is, Y = X + N. Find the distribution and density of Y if X and N are jointly
continuous random variables.
Solution  (see Fig. 4.24)

f_Y(y) = \frac{\partial}{\partial y} \int_{-\infty}^{\infty} \int_{-\infty}^{y - x} f_{X,N}(x, n)\, dn\, dx = \int_{-\infty}^{\infty} \frac{\partial}{\partial y} \int_{-\infty}^{y - x} f_{X,N}(x, n)\, dn\, dx = \int_{-\infty}^{\infty} f_{X,N}(x, y - x)\, dx
where x_m and x_k are constants. In the above integral, the impulses coincide when z = x_m + x_k; the integral then equals k\, \delta(z - x_m - x_k), where k = k_1 k_2. Hence

f_Z(z) = 0.04\, \delta(z - 6) + 0.08\, \delta(z - 7) + 0.16\, \delta(z - 8) + 0.12\, \delta(z - 9) + 0.05\, \delta(z - 7) + 0.1\, \delta(z - 8) + 0.2\, \delta(z - 9) + 0.15\, \delta(z - 10) + 0.01\, \delta(z - 8) + 0.02\, \delta(z - 9) + 0.04\, \delta(z - 10) + 0.03\, \delta(z - 11)
4.53 Find the density of W = X + Y where the densities of X and Y are assumed to be
fX (x) = u(x) – u(x – 1); fY (y) = u(y) – u(y – 1)
Solution  The pdfs of the random variables X and Y are unit-height rectangles on (0, 1), as shown in Fig. 4.25(a).

Given: W = X + Y. The pdf of the sum of two independent random variables is the convolution of their individual density functions. That is,

f_W(w) = \int_{-\infty}^{\infty} f_Y(y)\, f_X(w - y)\, dy

Figs. 4.25(b) and (c) show f_X(-y) and the shifted function f_X(w - y). From Fig. 4.25(c) we observe that f_W(w) = 0 for w < 0, since the functions f_X(w - y) and f_Y(y) do not overlap. For 0 < w < 1, the functions f_X(w - y) and f_Y(y) are drawn in Fig. 4.25(d); they overlap on (0, w), so f_W(w) = w there.
4.54 If X and Y are two independent zero-mean, unit-variance Gaussian random variables and a random variable Z is defined as Z = X + Y, find f_Z(z).

Solution

f_Z(z) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-(z - y)^2/2}\, e^{-y^2/2}\, dy = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-(z^2 - 2yz + 2y^2)/2}\, dy

Completing the square, z^2 - 2yz + 2y^2 = \frac{z^2}{2} + \left( \sqrt{2}\, y - \frac{z}{\sqrt{2}} \right)^2, so

f_Z(z) = \frac{1}{2\pi}\, e^{-z^2/4} \int_{-\infty}^{\infty} e^{-(\sqrt{2}\, y - z/\sqrt{2})^2/2}\, dy

Let \sqrt{2}\, y - \frac{z}{\sqrt{2}} = p \ \Rightarrow\ dy = \frac{dp}{\sqrt{2}}:

f_Z(z) = \frac{1}{2\pi} \left( \frac{1}{\sqrt{2}} \right) e^{-z^2/4} \int_{-\infty}^{\infty} e^{-p^2/2}\, dp = \frac{e^{-z^2/4}}{\sqrt{2\pi}\, \sqrt{2}} \left\{ \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} e^{-p^2/2}\, dp \right\}

The expression in braces equals 1, so

f_Z(z) = \frac{1}{\sqrt{2\pi}\, \sqrt{2}}\, e^{-z^2/2(\sqrt{2})^2}

That is, \sigma = \sqrt{2}. So Z is also a Gaussian random variable, with zero mean and variance 2.
4.55 If X and Y are independent random variables with density functions fX (x) = e–x u(x) and fY (y) = e–2y
u(y), find the density function of Z = X + Y.
Solution Given:
fX (x) = e–x u(x); fY (y) = e–2y u(y)
f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z - x)\, dx = \int_0^{z} e^{-x}\, e^{-2(z - x)}\, dx

= e^{-2z} \int_0^{z} e^{x}\, dx = e^{-2z} (e^{z} - 1) = e^{-z} - e^{-2z}

f_Z(z) = e^{-z} - e^{-2z} for z \ge 0, and 0 elsewhere
4.56 Two independent random variables X and Y have densities f_X(x) = e^{-x}\, u(x) and f_Y(y) = e^{-y}\, u(y). Find P(X + Y \le 1).

Solution  Since X and Y are independent, the density of Z = X + Y is the convolution

f_Z(z) = \int_0^{z} e^{-x}\, e^{-(z - x)}\, dx = e^{-z} \int_0^{z} dx = z e^{-z}, \quad z \ge 0

P(Z \le 1) = \int_0^{1} f_Z(z)\, dz = \int_0^{1} z e^{-z}\, dz = \left[ -z e^{-z} - e^{-z} \right]_0^{1} = 1 - 2e^{-1}
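A simulation sketch confirming the value 1 - 2e^{-1} = 0.2642...:

# Quick check of Solved Problem 4.56: X, Y i.i.d. exponential(1).
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.exponential(1.0, (2, 1_000_000))
print(np.mean(x + y <= 1), 1 - 2 / np.e)   # both ~ 0.2642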
4.57 Two independent random variables X and Y have the probability density functions respectively as
fX (x) = xe–x, x > 0 fY (y) = 1 0£y£1
Calculate the probability distribution and density functions of the random variable Z = X + Y.
Solution

f_X(x) = x e^{-x}, x > 0; \quad f_Y(y) = 1 for 0 \le y \le 1, and 0 otherwise. Given: Z = X + Y.

f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z - x)\, dx

Fig. 4.26  (sketches of f_X(x) = x e^{-x}, f_Y(y), and the shifted function f_Y(z - x) for z < 0, 0 < z < 1 and z > 1)

For z < 0, the functions do not overlap, so f_Z(z) = 0. For 0 < z < 1, they overlap on (0, z):

f_Z(z) = \int_0^{z} x e^{-x}\, dx = \left[ -x e^{-x} - e^{-x} \right]_0^{z} = 1 - e^{-z}(z + 1)

For z > 1, they overlap on (z - 1, z):

f_Z(z) = \int_{z - 1}^{z} x e^{-x}\, dx = \left[ -x e^{-x} - e^{-x} \right]_{z - 1}^{z} = z\, e^{-(z - 1)} - (z + 1)\, e^{-z}
4.58 X and Y are two independent random variables with uniform density over (–2, 2) and (–1, 1)
respectively.
Find P(Z < –1) where Z = X + Y.
Solution  The pdfs of X and Y are shown in Fig. 4.27(a) and (b): f_X(x) = 1/4 on (-2, 2) and f_Y(y) = 1/2 on (-1, 1).

Fig. 4.27  (sketches of f_X, f_Y and of the overlap of f_Y(z - x) with f_X(x) for z < -3, -3 < z < -1, -1 < z < 1 and 1 < z < 3)

Convolving the two rectangles gives, for -3 < z < -1, f_Z(z) = (z + 3)/8. Hence

P(Z < -1) = \int_{-3}^{-1} \frac{z + 3}{8}\, dz = \frac{1}{8} \left[ \frac{z^2}{2} + 3z \right]_{-3}^{-1} = \frac{1}{8} \left[ \frac{(-1)^2 - (-3)^2}{2} + 3(-1) - 3(-3) \right] = \frac{1}{4}
4.59 The random variables X and Y have density functions f_X(x) = \frac{1}{a} (u(x) - u(x - a)) and f_Y(y) = b e^{-by}\, u(y), where a > 0. Find the density function of W = X + Y if X and Y are statistically independent.

Solution  Given: f_X(x) = \frac{1}{a} (u(x) - u(x - a)), \quad f_Y(y) = b e^{-by}\, u(y)

Fig. 4.28  (sketches of f_X, f_Y and of the overlap of f_X(z - y) with f_Y(y) for 0 < z < a and z > a)

For z < 0, f_Z(z) = 0.

For 0 < z < a,

f_Z(z) = \int_0^{z} \frac{1}{a}\, b e^{-by}\, dy = \frac{-1}{a} (e^{-bz} - 1) = \frac{1 - e^{-bz}}{a}

For z > a,

f_Z(z) = \int_{z - a}^{z} \frac{b}{a}\, e^{-by}\, dy = \frac{-1}{a} \left[ e^{-by} \right]_{z - a}^{z} = \frac{1}{a} \left[ e^{-b(z - a)} - e^{-bz} \right] = \frac{e^{-bz}}{a} (e^{ba} - 1)
4.60 Two independent random variables X and Y have densities fX (x) = 5e–5x u(x) and fY(y) = 2e–2y u(y).
Find the density of the sum Z = X + Y.
Solution

f_Z(z) = \int_0^{z} 5 e^{-5x} \cdot 2 e^{-2(z - x)}\, dx = 10 e^{-2z} \int_0^{z} e^{-3x}\, dx = 10 e^{-2z} \left[ \frac{e^{-3x}}{-3} \right]_0^{z}

= \frac{-10}{3} e^{-2z} (e^{-3z} - 1) = \frac{10}{3} (e^{-2z} - e^{-5z})

f_Z(z) = \frac{10}{3} (e^{-2z} - e^{-5z})\, u(z)
4.61 The probability density functions of statistically independent random variables X and Y are

f_X(x) = \frac{1}{2} e^{-(x - 1)/2}\, u(x - 1); \quad f_Y(y) = \frac{1}{4} e^{-(y - 3)/4}\, u(y - 3)

Find the pdf of Z = X + Y.

Solution  From the given pdfs, f_X(x) = 0 for x < 1 and f_Y(y) = 0 for y < 3. Since X and Y are independent, the pdf of Z = X + Y is the convolution of f_X and f_Y:

f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\, f_Y(z - x)\, dx

Fig. 4.29  (sketches of f_X(x), f_Y(y) and f_Y(z - x); for z - 3 < 1 the two curves do not overlap)

From Fig. 4.29, for z - 3 < 1, i.e. z < 4, there is no overlap of f_X(x) and f_Y(z - x), and hence f_Z(z) = 0 for z < 4. For z > 4, the curves overlap in the interval between 1 and z - 3. Therefore,

f_Z(z) = \int_1^{z - 3} \frac{1}{2} e^{-(x - 1)/2} \cdot \frac{1}{4} e^{-(z - x - 3)/4}\, dx = \frac{1}{8} \int_1^{z - 3} e^{-x/2}\, e^{1/2}\, e^{-z/4}\, e^{x/4}\, e^{3/4}\, dx

= \frac{1}{8} e^{(5 - z)/4} \int_1^{z - 3} e^{-x/4}\, dx = \frac{1}{8} e^{(5 - z)/4} \left[ \frac{e^{-x/4}}{-1/4} \right]_1^{z - 3}

= \frac{1}{2} e^{(5 - z)/4} \left( e^{-1/4} - e^{-(z - 3)/4} \right) = \frac{1}{2} \left( e^{-(z - 4)/4} - e^{-(z - 4)/2} \right), \quad z > 4
f_{X_i}(x_i) = \frac{1}{a} [u(x_i) - u(x_i - a)], \quad i = 1, 2, 3

Find and sketch the density function of Y = X_1 + X_2 + X_3 if a > 0 is constant.

Solution  Given: Y = X_1 + X_2 + X_3. Let Z = X_1 + X_2. Then

f_Z(z) = f_{X_1}(x_1) * f_{X_2}(x_2) = \int_{-\infty}^{\infty} f_{X_1}(x_1)\, f_{X_2}(z - x_1)\, dx_1

The pdf of X_i is shown in Fig. 4.30(a); the functions f_{X_1}(x_1) and f_{X_2}(z - x_1) are shown in Fig. 4.30(b). For 0 < z < a the overlap gives f_Z(z) = z/a^2; for a < z < 2a,

f_Z(z) = \frac{1}{a^2} [a - (z - a)] = \frac{2a - z}{a^2}

Now Y = Z + X_3, so f_Y(y) = f_Z(z) * f_{X_3}(x_3).
Fig. 4.30(g)  (f_Z(z) and f_{X_3}(y - z) on a common axis)

For y < 0, the functions f_Z(z) and f_{X_3}(y - z) do not overlap. Therefore, f_Y(y) = 0.

For 0 < y < a (Fig. 4.30(h)),

f_Y(y) = \int_0^{y} f_Z(z)\, f_{X_3}(y - z)\, dz = \int_0^{y} \frac{z}{a^2} \left( \frac{1}{a} \right) dz = \frac{1}{a^3}\, \frac{z^2}{2} \Big|_0^{y} = \frac{y^2}{2a^3}

For a < y < 2a (Fig. 4.30(i)),

f_Y(y) = \int_{y - a}^{y} f_Z(z)\, f_{X_3}(y - z)\, dz = \int_{y - a}^{a} \frac{z}{a^2} \left( \frac{1}{a} \right) dz + \int_a^{y} \left( \frac{2a - z}{a^2} \right) \frac{1}{a}\, dz

= \frac{1}{a^3}\, \frac{z^2}{2} \Big|_{y - a}^{a} + \frac{1}{a^3} \left[ 2az \Big|_a^{y} - \frac{z^2}{2} \Big|_a^{y} \right]

= \frac{1}{2a^3} \left[ a^2 - (y - a)^2 \right] + \frac{1}{a^3} \left[ 2a(y - a) - \frac{y^2 - a^2}{2} \right]

= \frac{1}{2a^3} \left[ 2ay - y^2 \right] + \frac{1}{a^3} \left[ 2ay - 2a^2 - \frac{y^2}{2} + \frac{a^2}{2} \right]

= \frac{(2ay - y^2) + (4ay - 3a^2 - y^2)}{2a^3} = \frac{6ay - 2y^2 - 3a^2}{2a^3}

For 2a < y < 3a (Fig. 4.30(j)),

f_Y(y) = \int_{y - a}^{2a} f_Z(z)\, f_{X_3}(y - z)\, dz = \int_{y - a}^{2a} \left( \frac{2a - z}{a^2} \right) \left( \frac{1}{a} \right) dz = \frac{1}{a^3} \int_{y - a}^{2a} (2a - z)\, dz

= \frac{1}{a^3} \left[ 2a(2a - y + a) - \frac{z^2}{2} \Big|_{y - a}^{2a} \right] = \frac{1}{a^3} \left\{ 2a(3a - y) - \frac{(2a)^2 - (y - a)^2}{2} \right\}

= \frac{1}{a^3} \left\{ 6a^2 - 2ay - \frac{4a^2 - y^2 - a^2 + 2ay}{2} \right\} = \frac{1}{2a^3} \left[ 9a^2 - 6ay + y^2 \right] = \frac{(3a - y)^2}{2a^3}

For y > 3a, f_Y(y) = 0.
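The piecewise density derived above can be verified by simulation; the sketch below takes a = 1 (an arbitrary choice) and compares a histogram of X_1 + X_2 + X_3 against the three formulas:

# Simulation sketch for Y = X1 + X2 + X3, Xi i.i.d. uniform(0, a), with a = 1.
import numpy as np

def fY(y, a=1.0):
    # piecewise result from the text
    if 0 < y < a:          return y**2 / (2 * a**3)
    if a <= y < 2 * a:     return (6*a*y - 2*y**2 - 3*a**2) / (2 * a**3)
    if 2 * a <= y < 3 * a: return (3*a - y)**2 / (2 * a**3)
    return 0.0

rng = np.random.default_rng(3)
ysamp = rng.uniform(0, 1, (3, 500_000)).sum(axis=0)
hist, edges = np.histogram(ysamp, bins=60, range=(0, 3), density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
print(max(abs(hist[i] - fY(m)) for i, m in enumerate(mids)))  # small deviation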
4.63 If X and Y are gamma random variables with respective parameters (a_1, b) and (a_2, b), prove that X + Y is a gamma random variable with parameters (a_1 + a_2, b).

Solution

Given: f_X(x) = \frac{b^{a_1} x^{a_1 - 1} e^{-bx}}{\Gamma(a_1)}\, u(x) \quad \text{and} \quad f_Y(y) = \frac{b^{a_2} y^{a_2 - 1} e^{-by}}{\Gamma(a_2)}\, u(y)

Let Z = X + Y. Then

f_Z(z) = \int_{-\infty}^{\infty} f_X(z - y)\, f_Y(y)\, dy

Let y = xz \Rightarrow dy = z\, dx; as y runs from 0 to z, x runs from 0 to 1:

f_Z(z) = \int_0^{1} \frac{b^{a_1 + a_2}\, e^{-bz}\, z^{a_1 - 1} (1 - x)^{a_1 - 1}\, z^{a_2 - 1} x^{a_2 - 1}}{\Gamma(a_1)\, \Gamma(a_2)}\, z\, dx

= \frac{b^{a_1 + a_2}\, z^{a_1 + a_2 - 1}\, e^{-bz}}{\Gamma(a_1)\, \Gamma(a_2)} \int_0^{1} (1 - x)^{a_1 - 1} x^{a_2 - 1}\, dx = \frac{b^{a_1 + a_2}\, z^{a_1 + a_2 - 1}\, e^{-bz}}{\Gamma(a_1)\, \Gamma(a_2)}\, B(a_2, a_1)

= \frac{b^{a_1 + a_2}\, z^{a_1 + a_2 - 1}\, e^{-bz}}{\Gamma(a_1)\, \Gamma(a_2)} \cdot \frac{\Gamma(a_1)\, \Gamma(a_2)}{\Gamma(a_1 + a_2)} = \frac{b^{a_1 + a_2}\, e^{-bz}\, z^{a_1 + a_2 - 1}}{\Gamma(a_1 + a_2)}

which is the gamma distribution with parameters (a_1 + a_2, b).
4.64 If X1, X2, º, Xn are n independent exponential random variables each having parameter b then
prove that X1 + X2 + º + Xn is gamma random variable with parameter (n, b).
Solution  Let Z = X_1 + X_2, with f_{X_1}(x_1) = b e^{-bx_1}\, u(x_1) and f_{X_2}(x_2) = b e^{-bx_2}\, u(x_2).

f_Z(z) = \int_{-\infty}^{\infty} f_{X_1}(z - x_2)\, f_{X_2}(x_2)\, dx_2 = b^2 \int_0^{z} e^{-b(z - x_2)}\, e^{-bx_2}\, dx_2 = b^2 e^{-bz} \int_0^{z} dx_2 = b^2 z\, e^{-bz}, \quad z \ge 0

Let T = X_1 + X_2 + X_3 = Z + X_3; we know f_{X_3}(x_3) = b e^{-bx_3}\, u(x_3):

f_T(t) = f_Z(z) * f_{X_3}(x_3) = \int_{-\infty}^{\infty} f_{X_3}(t - z)\, f_Z(z)\, dz = b^3 \int_0^{t} e^{-b(t - z)}\, z e^{-bz}\, dz = b^3 e^{-bt} \int_0^{t} z\, dz = \frac{b^3 e^{-bt}\, t^2}{2}

If we repeat the above process, we get

f_Y(y) = \frac{b^n e^{-by}\, y^{n - 1}}{(n - 1)!} = \frac{b^n e^{-by}\, y^{n - 1}}{\Gamma(n)}

which is a gamma random variable with parameters (n, b).
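A simulation sketch of this gamma result (n = 4 and b = 2 are arbitrary choices; note that NumPy's exponential sampler is parameterized by the scale 1/b):

# Simulation sketch of Solved Problem 4.64: a sum of n i.i.d. exponential(b)
# variables has the gamma pdf b^n y^(n-1) e^(-b y) / (n-1)!.
import numpy as np
from math import factorial

rng = np.random.default_rng(4)
n, b = 4, 2.0
y = rng.exponential(1 / b, (n, 400_000)).sum(axis=0)
print(y.mean(), n / b)                                  # gamma mean n/b = 2
ygrid = 1.0
pdf = b**n * ygrid**(n - 1) * np.exp(-b * ygrid) / factorial(n - 1)
hist, edges = np.histogram(y, bins=200, range=(0, 8), density=True)
i = np.searchsorted(edges, ygrid) - 1
print(hist[i], pdf)                                     # histogram ~ pdf at y = 1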
Solution  We know

f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx = \int_0^{y} c\, x^{n_1 - 1} (y - x)^{n_2 - 1}\, e^{-y}\, dx

Let x = yz \Rightarrow dx = y\, dz:

f_Y(y) = \int_0^{1} c\, (yz)^{n_1 - 1} (1 - z)^{n_2 - 1}\, y^{n_2 - 1}\, e^{-y}\, y\, dz = c\, y^{n_1 + n_2 - 1}\, e^{-y} \int_0^{1} z^{n_1 - 1} (1 - z)^{n_2 - 1}\, dz

= c\, y^{n_1 + n_2 - 1}\, e^{-y}\, B(n_1, n_2) = c\, y^{n_1 + n_2 - 1}\, e^{-y}\, \frac{\Gamma(n_1)\, \Gamma(n_2)}{\Gamma(n_1 + n_2)}

We know \int_0^{\infty} f_Y(y)\, dy = 1:

c\, \Gamma(n_1)\, \Gamma(n_2) \int_0^{\infty} \frac{y^{n_1 + n_2 - 1}\, e^{-y}}{\Gamma(n_1 + n_2)}\, dy = 1

The integral equals 1, so c = \frac{1}{\Gamma(n_1)\, \Gamma(n_2)}.

The marginal of X is

f_X(x) = \int_x^{\infty} c\, x^{n_1 - 1} (y - x)^{n_2 - 1}\, e^{-y}\, dy = c\, x^{n_1 - 1} \int_x^{\infty} (y - x)^{n_2 - 1}\, e^{-y}\, dy

Let y - x = p \Rightarrow dy = dp:

f_X(x) = c\, x^{n_1 - 1} \int_0^{\infty} p^{n_2 - 1}\, e^{-(p + x)}\, dp = c\, x^{n_1 - 1}\, e^{-x} \int_0^{\infty} p^{n_2 - 1}\, e^{-p}\, dp = c\, \Gamma(n_2)\, x^{n_1 - 1}\, e^{-x}

= \frac{1}{\Gamma(n_1)\, \Gamma(n_2)}\, \Gamma(n_2)\, x^{n_1 - 1}\, e^{-x} = \frac{x^{n_1 - 1}\, e^{-x}}{\Gamma(n_1)}
4.66 If X and Y are independent Poisson random variables with parameters \lambda_1 and \lambda_2, compute the distribution of X + Y.

Solution

Given: p_X(x) = \frac{e^{-\lambda_1} \lambda_1^x}{x!} \quad \text{and} \quad p_Y(y) = \frac{e^{-\lambda_2} \lambda_2^y}{y!}

p_{X + Y}(z) = P(X + Y = z) = \sum_{k=0}^{z} P(X = k, Y = z - k) = \sum_{k=0}^{z} P(X = k)\, P(Y = z - k) \quad \text{(since X and Y are independent)}

= \sum_{k=0}^{z} \frac{e^{-\lambda_1} \lambda_1^k}{k!} \cdot \frac{e^{-\lambda_2} \lambda_2^{z - k}}{(z - k)!} = e^{-(\lambda_1 + \lambda_2)} \sum_{k=0}^{z} \frac{\lambda_1^k\, \lambda_2^{z - k}}{k!\, (z - k)!}

= \frac{e^{-(\lambda_1 + \lambda_2)}}{z!} \sum_{k=0}^{z} \frac{z!}{k!\, (z - k)!}\, \lambda_1^k\, \lambda_2^{z - k} = \frac{e^{-(\lambda_1 + \lambda_2)}}{z!} \sum_{k=0}^{z} \binom{z}{k} \lambda_1^k\, \lambda_2^{z - k} = \frac{e^{-(\lambda_1 + \lambda_2)}}{z!} (\lambda_1 + \lambda_2)^z

Therefore, Z = X + Y is a Poisson random variable with parameter \lambda_1 + \lambda_2.
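The identity can be confirmed numerically at any point z by comparing the convolution sum with the Poisson(\lambda_1 + \lambda_2) pmf directly (the values l1 = 2, l2 = 3 and z = 4 are arbitrary choices):

# Check of Solved Problem 4.66: X + Y is Poisson(l1 + l2).
from math import exp, factorial

l1, l2, z = 2.0, 3.0, 4
pX = lambda k: exp(-l1) * l1**k / factorial(k)
pY = lambda k: exp(-l2) * l2**k / factorial(k)
conv = sum(pX(k) * pY(z - k) for k in range(z + 1))
direct = exp(-(l1 + l2)) * (l1 + l2)**z / factorial(z)
print(conv, direct)   # identical up to round-off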
Practice Problems

4.23 If X and Y are independent Rayleigh random variables with common parameter \sigma^2, determine the density of Z = X/Y. \quad \left( \text{Ans: } \frac{2z}{(1 + z^2)^2},\ 0 \le z < \infty \right)

4.24 The random variables X and Y are independent with exponential densities f_X(x) = a e^{-ax}\, u(x) and f_Y(y) = b e^{-by}\, u(y). Find the density of (a) max(X, Y), (b) 2X + Y.

\left( \text{Ans: (a) } \{a e^{-az} (1 - e^{-bz}) + b e^{-bz} (1 - e^{-az})\}\, u(z); \ \text{(b) } \frac{ab}{a - 2b} (e^{-bz} - e^{-az/2})\, u(z) \right)
According to the central limit theorem, the variate W_N has a distribution that approaches the Gaussian distribution with mean N\mu_X and variance N\sigma_X^2 as N \to \infty, provided the moment generating function exists.

Proof: To prove that the random variable has a Gaussian density function, we should prove that the MGF of W is e^{u^2/2}.

Using \log(1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \cdots,

\log(M_W(u)) = N \left\{ \frac{u^2}{2N} + E\left[ \frac{R}{N} \right] - \frac{1}{2} \left( \frac{u^2}{2N} + E\left[ \frac{R}{N} \right] \right)^2 + \cdots \right\}

= \frac{u^2}{2} + E[R] - \frac{N}{2} \left( \frac{u^2}{2N} + \frac{E[R]}{N} \right)^2 + \cdots \qquad (4.94)

E(R) approaches zero as N \to \infty, and the remaining terms vanish at least as fast as 1/N. Therefore,

\lim_{N \to \infty} \log[M_W(u)] = \frac{u^2}{2} \quad \Rightarrow \quad \lim_{N \to \infty} M_W(u) = e^{u^2/2} \qquad (4.95)
Let X_1, X_2, \ldots, X_N be a random sample from a random variable X. Then we write the sample mean \bar{X} as

\bar{X} = \frac{X_1 + X_2 + \cdots + X_N}{N} = \frac{Y_N}{N} \qquad (4.96)

Since the sample mean is also a random variable, it has a mean value, which is given by

E[\bar{X}] = E\left[ \frac{1}{N} \sum_{i=1}^{N} X_i \right] = \frac{1}{N} \sum_{i=1}^{N} E[X_i] = \frac{1}{N} \sum_{i=1}^{N} \mu_X = \mu_X \qquad (4.97)

That is, the mean value of the sample mean is equal to the true mean. The variance of the sample mean is

\text{Var}[\bar{X}] = \text{Var}\left[ \frac{1}{N} \sum_{i=1}^{N} X_i \right] = \frac{1}{N^2}\, \text{Var}\left[ \sum_{i=1}^{N} X_i \right] = \frac{1}{N^2} \sum_{i=1}^{N} \text{Var}(X_i) = \frac{1}{N^2} (N \sigma_X^2) = \frac{\sigma_X^2}{N} \qquad (4.98)

Since the sample mean is the sum of random variables, the CLT says that it tends to be asymptotically normal regardless of the distribution of the random variables X_i, i = 1, 2, \ldots, N. If we define the standard normal score of the sample mean as

Z = \frac{\bar{X} - \mu_X}{\sigma_X / \sqrt{N}}, \quad \text{then} \quad F_{\bar{X}}(x) = P(\bar{X} < x) = \Phi\left( \frac{x - \mu_X}{\sigma_X / \sqrt{N}} \right) \qquad (4.99)

Let X_1, X_2, \ldots, X_k be a sequence of independent and identically distributed uniform random variables. Consider the sum Z = X_1 + X_2. From Solved Problem (4.62), the pdf of Z is triangular, as shown in Fig. (4.30). In the same problem, we observed that the pdf of Y = X_1 + X_2 + X_3 is parabolic in shape, which is quite close to the Gaussian pdf. Figure (4.31) shows how fast the sum of n independent random variables tends to a Gaussian random variable.
Fig. 4.31  Density f_X(x) of the normalized sum of n i.i.d. uniform random variables for (a) n = 1, (b) n = 2, (c) n = 3, (d) n = 5, approaching the Gaussian shape
REVIEW QUESTIONS
17. State and prove the central limit theorem.
18. State the central limit theorem for equal and unequal distributions.
Solved Problems
4.67 Ten dice are thrown. Find the approximate probability that the sum obtained is between 40 and 50.

Solution  Let X_i be the number shown on the i-th die, and define the sum Y_N = \sum_{i=1}^{10} X_i. When we throw a die, the possible outcomes are 1, 2, 3, 4, 5 and 6, each with probability 1/6. Therefore,

E(X_i) = \sum_i x_i\, p_{X_i}(x_i) = \frac{1}{6} (1 + 2 + 3 + 4 + 5 + 6) = \frac{7}{2}

E(X_i^2) = \sum_i x_i^2\, p_{X_i}(x_i) = \frac{1}{6} (1^2 + 2^2 + 3^2 + 4^2 + 5^2 + 6^2) = \frac{91}{6}

\sigma_{X_i}^2 = E[X_i^2] - \{E[X_i]\}^2 = \frac{91}{6} - \left( \frac{7}{2} \right)^2 = \frac{35}{12}

Also, E(Y_N) = 10\, E(X_i) = 35 \quad \text{and} \quad \sigma_{Y_N}^2 = 10\, \sigma_{X_i}^2 = \frac{175}{6}

If the sum approximates a normal random variable, then

P(Y_N \le y) = F_{Y_N}(y) = \Phi\left( \frac{y - E[Y_N]}{\sigma_{Y_N}} \right) = \Phi\left( \frac{y - 35}{\sqrt{175/6}} \right)

P(40 \le Y_N \le 50) = \Phi\left( \frac{50 - 35}{\sqrt{175/6}} \right) - \Phi\left( \frac{40 - 35}{\sqrt{175/6}} \right) = \Phi(2.78) - \Phi(0.93) = 0.9973 - 0.8238 = 0.1735
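Since the single-die pmf is known exactly, the CLT answer can be compared with the exact probability obtained by convolving the pmf ten times; a sketch:

# Exact check of Solved Problem 4.67 by repeated convolution of the die pmf.
import numpy as np

die = np.ones(6) / 6                     # faces 1..6, equally likely
pmf = die.copy()
for _ in range(9):                       # convolve ten single-die pmfs
    pmf = np.convolve(pmf, die)
total = np.arange(10, 61)                # possible sums 10..60
exact = pmf[(total >= 40) & (total <= 50)].sum()
print(exact)   # exact inclusive probability; the CLT figure 0.1735 in the
               # text ignores the discreteness (no continuity correction)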
4.68 A distribution with unknown mean \mu has variance 1.5. Using the CLT, find how large a sample should be taken from the distribution so that, with probability at least 0.95, the sample mean will be within 0.5 of the population mean.

Solution  Let \bar{X} be the sample mean of a sample of size N. Given E(X_i) = \mu and \sigma^2 = 1.5. The variance of the sample mean is \sigma^2/N and its mean is \mu; that is, \bar{X} \sim N(\mu, \sigma^2/N). Let

Z = \frac{\bar{X} - \mu}{\sigma / \sqrt{N}}

P\left( \frac{|\bar{X} - \mu|}{\sigma/\sqrt{N}} \le \frac{0.5}{\sigma/\sqrt{N}} \right) = P\left( \frac{-0.5}{\sigma/\sqrt{N}} \le Z \le \frac{0.5}{\sigma/\sqrt{N}} \right) \ge 0.95

\Rightarrow\ 2\Phi\left( \frac{0.5\sqrt{N}}{\sqrt{1.5}} \right) - 1 = 0.95 \quad \Rightarrow \quad \Phi\left( \frac{0.5\sqrt{N}}{\sqrt{1.5}} \right) = 0.975

From Table (2.4), \frac{0.5\sqrt{N}}{\sqrt{1.5}} = 1.96 \ \Rightarrow\ N = \frac{(1.96)^2 (1.5)}{0.25} = 23.05 \ \Rightarrow\ N \ge 24
4.69 The lifetime of a certain brand of electric bulb may be considered a random variable with mean 1200 hours and standard deviation 250 hours. Using the CLT, find the probability that the average lifetime of 60 bulbs exceeds 1250 hours.

Solution  Given: sample size N = 60, \mu = 1200, \sigma = 250. Let \bar{X} denote the mean lifetime of the 60 bulbs. We know E[\bar{X}] = \mu = 1200 and

\sigma_{\bar{X}}^2 = \frac{\sigma^2}{N} = \frac{(250)^2}{60} = 1041.67 \quad \Rightarrow \quad \sigma_{\bar{X}} = 32.27

P(\bar{X} > 1250) = 1 - \Phi\left( \frac{1250 - 1200}{32.27} \right) = 1 - \Phi(1.55) = 1 - 0.9394 = 0.0606
Solution  Let X_i, i = 1, 2, \ldots, 100 be i.i.d. random variables with E(X_i) = 8 and \sigma_{X_i}^2 = 4. Let us define

Y_N = X_1 + X_2 + X_3 + \cdots + X_{100}, \qquad W_N = \frac{Y_N - E[Y_N]}{\sigma_{Y_N}} = \frac{Y_N - 800}{20}

Solution  Since the error is uniformly distributed on (-0.05, 0.05), the mean value is zero and the variance is

\frac{[0.05 - (-0.05)]^2}{12} = 8.33 \times 10^{-4}

Let Y_N be a random variable which is the sum of the 1000 numbers, and let the standard normal variable be

Z = \frac{Y_N - E[Y_N]}{\sigma_{Y_N}} = \frac{Y_N}{\sigma_{Y_N}}
Given: variance \sigma^2 = 400 \Rightarrow \sigma = 20, and the sample mean \bar{X} should not differ from \mu = 60 by more than 4; that is, |\bar{X} - 60| \le 4. Let

Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{N}} = \frac{\bar{X} - 60}{20/\sqrt{100}} = \frac{\bar{X} - 60}{2}

P(|\bar{X} - 60| \le 4) = P\left( |Z| \le \frac{4}{2} \right) = P(-2 \le Z \le 2) = 2\Phi(2) - 1 = 2(0.9772) - 1 = 0.9544
4.73 Consider a random variable YN which is the sum of 36 independent experimental values of the
random variable X whose pdf is given by
Ï1
Ô 1£ x £ 6
fX (x) = Ì 5
ÔÓ 0 otherwise
Find the probability that YN lies in the range 110 £ YN £ 130.
Solution  E(X) = \frac{1 + 6}{2} = 3.5 \quad \text{and} \quad \sigma_X^2 = \frac{(6 - 1)^2}{12} = \frac{25}{12}

The mean and variance of Y_N are given by

E(Y_N) = N\, E(X) = 36(3.5) = 126, \qquad \sigma_{Y_N}^2 = N \sigma_X^2 = 36 \left( \frac{25}{12} \right) = 75

If the sum approximates a normal random variable, the CDF of Y_N becomes

P(Y_N \le y) = F_{Y_N}(y) = \Phi\left( \frac{y - E[Y_N]}{\sigma_{Y_N}} \right) = \Phi\left( \frac{y - 126}{\sqrt{75}} \right)

The probability that Y_N lies in the range 110 \le Y_N \le 130 is given by

P(110 \le Y_N \le 130) = F_{Y_N}(130) - F_{Y_N}(110) = \Phi\left( \frac{130 - 126}{\sqrt{75}} \right) - \Phi\left( \frac{110 - 126}{\sqrt{75}} \right) = \Phi(0.46) - \Phi(-1.85) \approx 0.6772 - 0.0322 = 0.645
4.74 A random sample of size 25 is taken from a normal population with mean 49 and variance 1.44.
Using CLT, find the probability that the mean of this sample falls between 48.52 and 49.6.
4.75 In an elevator, a caution notice reads 'capacity 600 kg or 10 persons'. Assume a standard deviation of 15 kg for the weight of a person drawn at random from all the people who might ride the elevator, and calculate approximately the expected weight, given that the probability that a full load of 10 persons weighs more than 600 kg is 0.25.

Solution  Given: N = 10. Let X_i (i = 1, 2, \ldots, 10) denote the individual weights; the total weight is Y_N = \sum_{i=1}^{10} X_i. Let E(X_i) = \mu and \sigma_{X_i} = 15 kg. Then

E(Y_N) = 10\mu \quad \text{and} \quad \sigma_{Y_N} = \sigma_{X_i} \sqrt{N} = 15\sqrt{10}

Let W_N = \frac{Y_N - E[Y_N]}{\sigma_{Y_N}} = \frac{Y_N - 10\mu}{15\sqrt{10}}

P(Y_N > 600) = P\left( W_N > \frac{600 - 10\mu}{15\sqrt{10}} \right) = 0.25 \quad \Rightarrow \quad P\left( W_N \le \frac{600 - 10\mu}{15\sqrt{10}} \right) = 0.75

From the normal table, \Phi(0.675) \approx 0.75, so \frac{600 - 10\mu}{15\sqrt{10}} = 0.675 \ \Rightarrow\ 10\mu = 600 - 0.675(47.43) = 568 \ \Rightarrow\ \mu \approx 56.8 \text{ kg}
Practice Problems
4.25 A transistor has a lifetime T which is exponentially distributed with a = 0.5. If 100 transistors are tested, what is
the probability that 175 < T < 225 ? (Ans: 0.7888)
4.26 The lifetime of a bulb is an exponential random variable with a mean of 50 hours. If 20 lightbulbs are tested and
their lifetimes are measured, estimate the probability that the sum of the lifetimes is less than 1050 hours. (Ans: 0.5871)
Solved Problems
4.76 Two random variables X and Y have the following joint probability density function:

f_{X,Y}(x, y) = \begin{cases} 2 - x - y & 0 \le x \le 1 \text{ and } 0 \le y \le 1 \\ 0 & \text{otherwise} \end{cases}

Find the marginal probability density functions of X and Y.

Solution  The marginal density function of X is

f_X(x) = \int_0^{1} (2 - x - y)\, dy = \left( 2y - xy - \frac{y^2}{2} \right) \Big|_0^{1} = \frac{3}{2} - x, \quad 0 \le x \le 1

The marginal density function of Y is given by

f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx = \int_0^{1} (2 - x - y)\, dx = 2x \Big|_0^{1} - \frac{x^2}{2} \Big|_0^{1} - yx \Big|_0^{1} = \frac{3}{2} - y, \quad 0 \le y \le 1
P(X + Y \le 1) = \int_0^{1} \int_0^{1 - x} e^{-x}\, e^{-y}\, dy\, dx = \int_0^{1} e^{-x} \left[ (-e^{-y}) \Big|_0^{1 - x} \right] dx = \int_0^{1} e^{-x} \left[ 1 - e^{-(1 - x)} \right] dx

= \int_0^{1} (e^{-x} - e^{-1})\, dx = -e^{-x} \Big|_0^{1} - e^{-1}\, x \Big|_0^{1} = 1 - 2e^{-1}
F_{X,Y}(x, y) = \frac{xy}{ab}, \quad 0 \le x \le a,\ 0 \le y \le b; \qquad F_{X,Y}(0, 0) = 0

If x \ge a and 0 \le y \le b, \quad F_{X,Y}(x, y) = \frac{a}{a} \left( \frac{y}{b} \right) = \frac{y}{b}

If 0 \le x \le a and y \ge b, \quad F_{X,Y}(x, y) = \frac{x}{a} \left( \frac{b}{b} \right) = \frac{x}{a}

If x \ge a and y \ge b, \quad F_{X,Y}(x, y) = \frac{ab}{ab} = 1

Fig. 4.32  (sketch of F_{X,Y}(x, y))

The sketch of F_{X,Y}(x, y) is shown in Fig. 4.32, and the distribution function is

F_{X,Y}(x, y) = \begin{cases} 0 & x < 0,\ y < 0 \\ \dfrac{xy}{ab} & 0 < x < a \text{ and } 0 < y < b \\ \dfrac{x}{a} & 0 < x < a \text{ and } y \ge b \\ \dfrac{y}{b} & 0 < y < b \text{ and } x \ge a \\ 1 & x \ge a \text{ and } y \ge b \end{cases}

(b) If a < b, find P\left( X + Y \le \frac{3a}{4} \right).

Fig. 4.33  (the triangular region below the line x + y = 3a/4)
The intervals of integration for x and y are \left( 0, \frac{3a}{4} \right) and \left( 0, \frac{3a}{4} - x \right) respectively.

P\left( X + Y \le \frac{3a}{4} \right) = \int_0^{3a/4} \int_0^{\frac{3a}{4} - x} \frac{1}{ab}\, dy\, dx = \frac{1}{ab} \int_0^{3a/4} \left( \frac{3a}{4} - x \right) dx

= \frac{1}{ab} \left[ \frac{3a}{4}\, x \Big|_0^{3a/4} - \frac{x^2}{2} \Big|_0^{3a/4} \right] = \frac{1}{ab} \left[ \frac{9a^2}{16} - \frac{9a^2}{32} \right] = \frac{9a^2}{32ab} = \frac{9a}{32b}
4.79 The joint pdf f_{X,Y}(x, y) is given by

f_{X,Y}(x, y) = \begin{cases} k(6 - x - y) & 0 \le x \le 2,\ 0 \le y \le 2 \\ 0 & \text{otherwise} \end{cases}

Find F_{X,Y}(x, y) and f_{X,Y}(x, y) for different values of X and Y. Also find P(0.5 \le X \le 1.5, 1 \le Y \le 1.5).

Solution  For a valid pdf,

\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx\, dy = 1 \ \Rightarrow\ \int_0^{2} \int_0^{2} k(6 - x - y)\, dx\, dy = 1

k \int_0^{2} \left[ 6x \Big|_0^{2} - \frac{x^2}{2} \Big|_0^{2} - yx \Big|_0^{2} \right] dy = k \int_0^{2} (10 - 2y)\, dy = k(20 - 4) = 1

from which we can obtain k = \frac{1}{16}
In region V (x < 0 or y < 0), F_{X,Y}(x, y) = 0.

Fig. 4.34  (the regions I-V of the (x, y) plane)

Region I, 0 \le x \le 2, 0 \le y \le 2:

F_{X,Y}(x, y) = \frac{1}{16} \int_0^{x} \int_0^{y} (6 - u - v)\, dv\, du = \frac{1}{16} \int_0^{x} \left( 6y - uy - \frac{y^2}{2} \right) du = \frac{1}{16} \left[ 6xy - \frac{x^2 y}{2} - \frac{x y^2}{2} \right]

Region II, 0 \le x \le 2, y > 2: we get the marginal distribution function of X,

F_X(x) = F_{X,Y}(x, 2) = \frac{1}{16} \left[ 6x(2) - \frac{x^2}{2}(2) - \frac{x}{2}(2)^2 \right] = \frac{1}{16} (10x - x^2), \quad 0 \le x \le 2

Region IV, 0 \le y \le 2, x > 2:

F_Y(y) = F_{X,Y}(2, y) = \frac{1}{16} \left[ 6y(2) - \frac{y^2}{2}(2) - \frac{y}{2}(2)^2 \right] = \frac{1}{16} (10y - y^2), \quad 0 \le y \le 2

Region III, x > 2, y > 2: F_{X,Y}(x, y) = 1.

The density functions can be obtained by differentiating the distribution function. That is,

f_{X,Y}(x, y) = 0 for x \le 0 or y \le 0

f_{X,Y}(x, y) = \frac{1}{16} (6 - x - y), \quad 0 \le x \le 2,\ 0 \le y \le 2

f_X(x) = \frac{d}{dx} \left[ \frac{1}{16} (10x - x^2) \right] = \frac{1}{16} (10 - 2x), \quad 0 \le x \le 2\ (y > 2)

f_Y(y) = \frac{d}{dy} \left[ \frac{1}{16} (10y - y^2) \right] = \frac{1}{16} (10 - 2y), \quad 0 \le y \le 2\ (x > 2)

f_{X,Y}(x, y) = 0 for x > 2, y > 2
P(0.5 \le X \le 1.5, 1 \le Y \le 1.5) = \int_1^{1.5} \int_{0.5}^{1.5} \frac{1}{16} (6 - x - y)\, dx\, dy = \frac{1}{16} \int_1^{1.5} \left( 6x - \frac{x^2}{2} - xy \right) \Big|_{0.5}^{1.5}\, dy

= \frac{1}{16} \int_1^{1.5} (5 - y)\, dy = \frac{1}{16} \left[ 5y - \frac{y^2}{2} \right]_1^{1.5} = \frac{1}{16} \left[ 5(0.5) - \frac{(1.5)^2 - 1}{2} \right] = \frac{1}{16} [2.5 - 0.625] = 0.1172

or, equivalently,

P(0.5 \le X \le 1.5, 1 \le Y \le 1.5) = F_{X,Y}(1.5, 1.5) + F_{X,Y}(0.5, 1) - F_{X,Y}(1.5, 1) - F_{X,Y}(0.5, 1.5) = \frac{1}{16} (10.125 + 2.625 - 7.125 - 3.75) = 0.1172
4.80 Find the joint distribution function if

f_{X,Y}(x, y) = \begin{cases} \dfrac{1}{1 - e^{-a}}\, e^{-(x + y)} & 0 < x < a,\ 0 < y < \infty \\ 0 & \text{elsewhere} \end{cases}

Solution  F_{X,Y}(x, y) = 0 for x < 0, y < 0. For 0 < x < a and 0 < y < \infty,

F_{X,Y}(x, y) = \frac{1}{1 - e^{-a}} \int_0^{x} \int_0^{y} e^{-(u + v)}\, dv\, du = \frac{1}{1 - e^{-a}} \left[ -e^{-u} \Big|_0^{x} \right] \left[ -e^{-v} \Big|_0^{y} \right] = \frac{(1 - e^{-x})(1 - e^{-y})}{1 - e^{-a}}

For x \ge a,

F_{X,Y}(a, y) = \frac{(1 - e^{-a})(1 - e^{-y})}{1 - e^{-a}} = 1 - e^{-y}

\Rightarrow\ F_{X,Y}(x, y) = 1 - e^{-y} \quad \text{for } x \ge a \text{ and } 0 \le y < \infty
Solution

Given: f_{X,Y}(x, y) = \frac{1}{2\pi\sigma^2}\, e^{-(x^2 + y^2)/2\sigma^2}. Find P(X^2 + Y^2 \le a^2).

The region under consideration is a circle of radius a, as shown in Fig. 4.35. Therefore, we use polar coordinates: let x = r\cos\theta and y = r\sin\theta. Then dx\, dy = r\, dr\, d\theta; \theta varies from 0 to 2\pi and r varies from 0 to a.

P(X^2 + Y^2 \le a^2) = \int_{r=0}^{a} \int_{\theta=0}^{2\pi} \frac{1}{2\pi\sigma^2}\, e^{-r^2/2\sigma^2}\, r\, dr\, d\theta

Let r^2/2\sigma^2 = p \Rightarrow r\, dr = \sigma^2\, dp; when r = 0, p = 0, and when r = a, p = a^2/2\sigma^2:

= \frac{1}{2\pi\sigma^2} \int_0^{2\pi} \left( \int_0^{a^2/2\sigma^2} e^{-p}\, \sigma^2\, dp \right) d\theta = \frac{1}{2\pi} \int_0^{2\pi} \left[ (-e^{-p}) \Big|_0^{a^2/2\sigma^2} \right] d\theta

= \frac{1}{2\pi} \int_0^{2\pi} \left( 1 - e^{-a^2/2\sigma^2} \right) d\theta = 1 - e^{-a^2/2\sigma^2}
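A simulation sketch of this closed form (sigma = 1 and a = 1.5 are arbitrary choices):

# Check of P(X^2 + Y^2 <= a^2) = 1 - exp(-a^2/(2 sigma^2)) for independent
# zero-mean Gaussians with common variance sigma^2.
import numpy as np

rng = np.random.default_rng(5)
sigma, a = 1.0, 1.5
x, y = rng.normal(0, sigma, (2, 1_000_000))
print(np.mean(x**2 + y**2 <= a**2), 1 - np.exp(-a**2 / (2 * sigma**2)))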
Practice Problem
4.27 Find whether the random variables X and Y given in Solved Problem (4.80) are independent or not.
(Ans: Independent)
Solved Problems
4.82 In tossing-a-die experiment, the random variable X denotes the number of full pairs and the
random variable Y represents the remaining dots. Find the joint probability distribution and pmf. Sketch
the distribution function.
Solution  The outcomes of the experiment and their probabilities are shown below.

S          1     2     3     4     5     6
X          0     1     1     2     2     3
Y          1     0     1     0     1     0
P(X, Y)   1/6   1/6   1/6   1/6   1/6   1/6

Fig. 4.36  (sketch of the joint distribution function, with steps of 1/6 at the six sample points)

P(X = 0) = \frac{1}{6}; \quad P(X = 1) = \frac{1}{6} + \frac{1}{6} = \frac{1}{3}; \quad P(X = 2) = \frac{1}{6} + \frac{1}{6} = \frac{1}{3}; \quad P(X = 3) = \frac{1}{6}

P(Y = 0) = \frac{1}{6} + \frac{1}{6} + \frac{1}{6} = \frac{1}{2}; \quad P(Y = 1) = \frac{1}{6} + \frac{1}{6} + \frac{1}{6} = \frac{1}{2}
4.83 Find the marginal pmfs for the pair of random variables with the indicated joint pmf.

X \ Y     -1      0       1
-1        1/6     1/6     0
0         0       0       1/3
1         1/6     1/6     0

The row sums give P(X = -1) = P(X = 0) = P(X = 1) = \frac{1}{3}, and the column sums give P(Y = -1) = P(Y = 0) = P(Y = 1) = \frac{1}{3}.

P(X = -Y) = P(X = 0, Y = 0) + P(X = -1, Y = 1) + P(X = 1, Y = -1) = 0 + 0 + \frac{1}{6} = \frac{1}{6}
4.84 If the joint pdf of a two-dimensional random variable (X, Y) is given by

f_{X,Y}(x, y) = \begin{cases} k(6 - x - y) & 0 < x < 2,\ 2 < y < 4 \\ 0 & \text{otherwise} \end{cases}

find (a) the value of k, (b) P(X < 1, Y < 3), (c) P(X + Y < 3), and (d) P(X < 1 \mid Y < 3).

Solution

(a) We know \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx\, dy = 1:

\int_2^{4} \int_0^{2} k(6 - x - y)\, dx\, dy = k \int_2^{4} \left( 6x - \frac{x^2}{2} - yx \right) \Big|_0^{2}\, dy = k \int_2^{4} (10 - 2y)\, dy

= k \left[ 10(4 - 2) - y^2 \Big|_2^{4} \right] = k(20 - 12) = 1 \quad \Rightarrow \quad k = \frac{1}{8}
(b) P(X < 1, Y < 3) = \int_0^{1} \int_2^{3} \frac{1}{8} (6 - x - y)\, dy\, dx = \frac{1}{8} \int_0^{1} \left( 6y - xy - \frac{y^2}{2} \right) \Big|_2^{3}\, dx

= \frac{1}{8} \int_0^{1} \left( 6 - x - \frac{5}{2} \right) dx = \frac{1}{8} \int_0^{1} \left( \frac{7}{2} - x \right) dx = \frac{1}{8} \left( \frac{7}{2}\, x \Big|_0^{1} - \frac{x^2}{2} \Big|_0^{1} \right) = \frac{1}{8} \left[ \frac{7}{2} - \frac{1}{2} \right] = \frac{3}{8}

Fig. 4.37  (the region 0 < x < 2, 2 < y < 4 and the line x + y = 3)
(c) P(X + Y < 3) = \int_2^{3} \int_0^{3 - y} f_{X,Y}(x, y)\, dx\, dy \quad \text{(see Fig. 4.37 for the interval of integration)}

= \int_2^{3} \int_0^{3 - y} \frac{1}{8} (6 - x - y)\, dx\, dy = \frac{1}{8} \int_2^{3} \left\{ 6x \Big|_0^{3 - y} - \frac{x^2}{2} \Big|_0^{3 - y} - yx \Big|_0^{3 - y} \right\} dy

= \frac{1}{8} \int_2^{3} \left\{ 6(3 - y) - \frac{1}{2} (3 - y)^2 - y(3 - y) \right\} dy = \frac{1}{8} \int_2^{3} \left( \frac{y^2}{2} - 6y + \frac{27}{2} \right) dy

= \frac{1}{8} \left( \frac{y^3}{6} \Big|_2^{3} - 3y^2 \Big|_2^{3} + \frac{27}{2}\, y \Big|_2^{3} \right) = \frac{1}{8} \left( \frac{19}{6} - 15 + \frac{27}{2} \right) = \frac{1}{8} \left( \frac{19 - 90 + 81}{6} \right) = \frac{5}{24}
(d) P(X < 1 \mid Y < 3) = \frac{P(X < 1, Y < 3)}{P(Y < 3)}

f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx = \int_0^{2} \frac{1}{8} (6 - x - y)\, dx = \frac{1}{8} (12 - 2 - 2y) = \frac{5 - y}{4}

P(Y < 3) = \int_2^{3} \frac{5 - y}{4}\, dy = \frac{1}{4} \left[ 5y \Big|_2^{3} - \frac{y^2}{2} \Big|_2^{3} \right] = \frac{1}{4} \left[ 5 - \frac{5}{2} \right] = \frac{5}{8}

P(X < 1 \mid Y < 3) = \frac{3/8}{5/8} = \frac{3}{5}
4.85 If the joint density function of two random variables X and Y is

f_{X,Y}(x, y) = e^{-(x + y)}, \quad x \ge 0,\ y \ge 0; \qquad 0 \text{ otherwise}

find (a) P(X < 1), (b) P(X + Y < 1).

Solution
(a) f_X(x) = \int_0^{\infty} e^{-(x + y)}\, dy = e^{-x} \int_0^{\infty} e^{-y}\, dy = e^{-x}, \quad x \ge 0

P(X < 1) = \int_0^{1} f_X(x)\, dx = \int_0^{1} e^{-x}\, dx = -e^{-x} \Big|_0^{1} = 1 - e^{-1} = 0.632

Fig. 4.38  (the region below the line x + y = 1 in the first quadrant)

(b) P(X + Y < 1) = \int_0^{1} e^{-y} \left[ \int_0^{1 - y} e^{-x}\, dx \right] dy = \int_0^{1} e^{-y} \left( 1 - e^{-(1 - y)} \right) dy

= \int_0^{1} (e^{-y} - e^{-1})\, dy = -e^{-y} \Big|_0^{1} - e^{-1}\, y \Big|_0^{1} = (1 - e^{-1}) - e^{-1} = 0.264
4.86 The joint probability density function of two random variables X and Y is given by

f_{X,Y}(x, y) = \begin{cases} a(2x + y^2) & 0 \le x \le 2,\ 2 \le y \le 4 \\ 0 & \text{elsewhere} \end{cases}

Find (a) the value of a, and (b) P(X \le 1, Y > 3).

Solution

(a) We have \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dx\, dy = 1:

\int_2^{4} \int_0^{2} a(2x + y^2)\, dx\, dy = a \int_2^{4} \left[ x^2 \Big|_0^{2} + y^2\, x \Big|_0^{2} \right] dy = a \int_2^{4} (4 + 2y^2)\, dy

= a \left\{ 4y \Big|_2^{4} + \frac{2y^3}{3} \Big|_2^{4} \right\} = a \left\{ 4(4 - 2) + \frac{2}{3} (4^3 - 2^3) \right\} = a \left\{ 8 + \frac{2}{3} (56) \right\} = \frac{136}{3}\, a = 1

\Rightarrow\ a = \frac{3}{136}

(b) P(X \le 1, Y > 3) = \int_0^{1} \int_3^{4} \frac{3}{136} (2x + y^2)\, dy\, dx = \frac{3}{136} \int_0^{1} \left[ 2xy \Big|_3^{4} + \frac{y^3}{3} \Big|_3^{4} \right] dx

= \frac{3}{136} \int_0^{1} \left( 2x + \frac{37}{3} \right) dx = \frac{3}{136} \left[ x^2 \Big|_0^{1} + \frac{37}{3}\, x \Big|_0^{1} \right] = \frac{3}{136} \left( \frac{40}{3} \right) = \frac{5}{17}
4.87 Let X and Y be jointly continuous random variables with joint density function

f_{X,Y}(x, y) = \begin{cases} xy\, e^{-(x^2 + y^2)/2} & x > 0,\ y > 0 \\ 0 & \text{otherwise} \end{cases}

Check whether X and Y are independent. Find (a) P(X \le 1, Y \le 1), and (b) P(X + Y \le 1).

Solution  The marginal pdf of X is

f_X(x) = \int_0^{\infty} xy\, e^{-(x^2 + y^2)/2}\, dy = x e^{-x^2/2} \int_0^{\infty} y\, e^{-y^2/2}\, dy

With y^2/2 = t \Rightarrow y\, dy = dt,

\int_0^{\infty} y\, e^{-y^2/2}\, dy = \int_0^{\infty} e^{-t}\, dt = -e^{-t} \Big|_0^{\infty} = 1

\Rightarrow\ f_X(x) = x e^{-x^2/2}, \quad x > 0

Similarly, f_Y(y) = \int_0^{\infty} xy\, e^{-(x^2 + y^2)/2}\, dx = y e^{-y^2/2}, \quad y > 0

Since f_{X,Y}(x, y) = f_X(x)\, f_Y(y), X and Y are independent.
Solution  Let the two resistors be R_1 and R_2; the net resistance is R = R_1 + R_2. R_1 and R_2 are uniformly distributed over (9, 11) ohms (10% tolerance about 10 ohms). That is,

f_{R_1}(r_1) = 0.5 \text{ for } 9 < r_1 < 11; \quad f_{R_2}(r_2) = 0.5 \text{ for } 9 < r_2 < 11

Fig. 4.39  (the region r_1 + r_2 > 21 inside the square (9, 11) x (9, 11))

P(R_1 + R_2 \le 21) = 1 - P(R_1 + R_2 > 21)

P(R_1 + R_2 > 21) = 0.25 \int_{10}^{11} (11 - 21 + r_1)\, dr_1 = 0.25 \int_{10}^{11} (r_1 - 10)\, dr_1 = 0.25 \left[ \frac{r_1^2}{2} - 10 r_1 \right]_{10}^{11} = 0.25 \left[ \frac{21}{2} - 10 \right] = 0.125

P(R_1 + R_2 \le 21) = 1 - 0.125 = 0.875
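A simulation sketch of the same answer:

# Series-resistor example: R1, R2 uniform on (9, 11) ohms.
import numpy as np

rng = np.random.default_rng(6)
r1, r2 = rng.uniform(9, 11, (2, 1_000_000))
print(np.mean(r1 + r2 <= 21))   # ~ 0.875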
4.89 In an experiment of tossing a fair coin three times, (X, Y) is a random variable, where X denotes
number of heads on the first two tosses and Y denotes the number of heads on the third toss.
(a) Find the range of X.
(b) Find the range of Y.
(c) Find the range of (X, Y).
(d) Find (i) P(X £ 2, Y £ 1) (ii) P(X £ 1, Y £ 1) and (iii) P(X £ 0, Y £ 0).
Solution The sample space of the experiment is S = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
(a) Since X denotes the number of heads in the first two tosses, the range of X is
RX = {0, 1, 2}
(b) Since Y denotes number of heads on the third toss,
RY = {0, 1}
(c) The range of (X, Y) is {(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)}
(d) (i) P(X £ 2, Y £ 1) = ?
From the sample space, we can find each and every event satisfying the joint event (X £ 2, Y £ 1).
Therefore, P(X £ 2, Y £ 1) = 1.
(ii) P(X \le 1, Y \le 1): the outcomes that satisfy the joint event are (HTH, HTT, THH, THT, TTH, TTT). Therefore, P(X \le 1, Y \le 1) = \frac{6}{8} = \frac{3}{4}

(iii) P(X \le 0, Y \le 0): the only outcome that satisfies the joint event is (TTT). Therefore, P(X \le 0, Y \le 0) = \frac{1}{8}
Solution  Given:

f_{X,Y}(x, y) = xy\, e^{-y^2/4}, \quad 0 \le x \le 1,\ y \ge 0

The marginal pdf of X is (with y^2/4 = t \Rightarrow y\, dy = 2\, dt)

f_X(x) = \int_0^{\infty} xy\, e^{-y^2/4}\, dy = x \int_0^{\infty} 2 e^{-t}\, dt = -2x\, e^{-t} \Big|_0^{\infty} = 2x, \quad 0 \le x \le 1

f_Y(y) = \int_0^{1} xy\, e^{-y^2/4}\, dx = y\, e^{-y^2/4}\, \frac{x^2}{2} \Big|_0^{1} = \frac{y}{2}\, e^{-y^2/4}, \quad y \ge 0

Since f_X(x)\, f_Y(y) = xy\, e^{-y^2/4} = f_{X,Y}(x, y), X and Y are independent.

P_X(-r) = \frac{1}{4}; \quad P_X(0) = \frac{1}{2}; \quad P_X(r) = \frac{1}{4}

P_Y(-r) = \frac{1}{4}; \quad P_Y(0) = \frac{1}{2}; \quad P_Y(r) = \frac{1}{4}

P(X = 0) = \frac{1}{2}; \quad P(X < r) = \frac{3}{4}
REVIEW QUESTIONS
19. Describe vector random variables.
20. Define and explain joint distribution function and joint density function of two random variables X
and Y.
21. Distinguish between joint distribution and marginal distribution functions.
22. Explain the properties of joint distribution function.
23. Define joint probability mass function.
24. Define marginal pmf.
25. Write a note on joint probability matrix.
26. Define conditional distribution and density function of two random variables X and Y.
27. Explain the properties of conditional density function.
28. Define pmf of N random variables.
29. What is the condition for statistical independence of random variables X and Y?
30. What is the probability distribution function of sum of two random variables?
31. Explain the method of finding the distribution and density functions for a sum of statistically
independent random variables.
32. Find the pdf of sum of several random variables.
33. State and prove the central limit theorem.
EXERCISES
Problems
1. Two jointly exponential random variables X and Y have the joint distribution function

F_{X,Y}(x, y) = \begin{cases} 1 - e^{-4x} - e^{-3y} + e^{-(4x + 3y)} & x \ge 0,\ y \ge 0 \\ 0 & \text{otherwise} \end{cases}

(a) Find f_X(x) and f_Y(y). (b) Find P(X \le 0.5, Y \le 1).

\left( \text{Ans: (a) } f_X(x) = 4e^{-4x},\ x \ge 0;\ f_Y(y) = 3e^{-3y},\ y \ge 0; \ \text{(b) } 0.8216 \right)
2. The joint pmf of random variables X and Y is given by the matrix

X \ Y     1       2       3
1         1/4     1/8     0
2         1/8     0       1/8
3         1/8     1/8     1/8

Find the marginal pmfs of X and Y.

\left( \text{Ans: } P_X(1) = \frac{3}{8},\ P_X(2) = \frac{1}{4},\ P_X(3) = \frac{3}{8};\ P_Y(1) = \frac{1}{2},\ P_Y(2) = \frac{1}{4},\ P_Y(3) = \frac{1}{4} \right)
3. Two random variables X and Y have the joint pdf

f_{X,Y}(x, y) = \begin{cases} 2xy & 0 \le x \le 1,\ 0 \le y \le 2 \\ 0 & \text{elsewhere} \end{cases}

Find (a) P(X \le Y); (b) P(X \ge Y).

4. Find f_X(x) and f_Y(y) if

f_{X,Y}(x, y) = \begin{cases} \frac{1}{2} e^{-2y} & 0 \le x \le 4,\ y > 0 \\ 0 & \text{elsewhere} \end{cases} \quad \left( \text{Ans: } f_X(x) = \frac{1}{4},\ 0 \le x \le 4;\ f_Y(y) = 2e^{-2y},\ y \ge 0 \right)

5. If the joint pdf of X and Y is

f_{X,Y}(x, y) = \begin{cases} e^{-(x + y)} & x > 0,\ y > 0 \\ 0 & \text{otherwise} \end{cases}

check whether X and Y are independent. (Ans: Independent)

6. The joint pdf of random variables X and Y is given by

f_{X,Y}(x, y) = \begin{cases} 8xy/9 & 1 \le x \le y \le 2 \\ 0 & \text{otherwise} \end{cases}

Find the conditional density function of Y given X = x. \quad \left( \text{Ans: } \frac{2y}{4 - x^2},\ x \le y \le 2 \right)

7. If X_1, X_2, X_3, \ldots, X_n are uniform random variables with mean 2.5 and variance 3/4, use the CLT to estimate P(108 \le S_n \le 126), where S_n = X_1 + X_2 + \cdots + X_n and n = 48. (Ans: 0.9275)

8. X and Y are independent with a common pdf

f_X(x) = e^{-x} for x \ge 0 (0 for x < 0); \quad f_Y(y) = e^{-y} for y \ge 0 (0 for y < 0)

Find the pdf of Z = X + Y. \quad \left( \text{Ans: } f_Z(z) = z e^{-z} \text{ for } z \ge 0, \text{ 0 elsewhere} \right)

9. The joint pdf of a bivariate random variable (X, Y) is given as

f_{X,Y}(x, y) = \begin{cases} \dfrac{kx}{y} & 1 < x < 2,\ 1 < y < 2 \\ 0 & \text{otherwise} \end{cases}

where k is a constant. (a) Determine k. (b) Are X and Y independent? \quad \left( \text{Ans: } k = \frac{2}{3 \ln 2}; \text{ Independent} \right)
10. Two fair dice are rolled. Find the joint pmf of X and Y when X is the smallest and Y is the largest
value obtained on the dice.
= 0 otherwise

Check whether X and Y are independent. Find (a) P(X \le 1, Y \le 1) and (b) P(X + Y \le 1).

28. If f_{X,Y}(x, y) = 0.5 \exp(-|x| - |y|), where X and Y are two random variables, and Z = X + Y, find f_Z(z).
29. If two random variables have the joint probability density
Multiple Random Variables 4.125
Ï2
Ô ( x + 2 x2 ) for 0 < x1 < 1, 0 < x2 < 1
f (x1, x2) = Ì 3 1
Ô0 elsewhere
Ó
Find
(a) the marginal density of x2
(b) conditional density of the first given that the second takes on the value x2.
30. Let X and Y be jointly continuous random variables with joint density function
f ( x, y) = xy exp[ -( x 2 + y 2 )] ; x > 0, y > 0
=0 otherwise
Check whether X and Y are independent. Find
(a) P(X £ 1, Y < 1) and
(b) P(X + Y < 1)
Ï -2 x cos ( y / 2) 0 £ x £ 1 ; 0 £ y £ p Ê 1 ˆ
31. If the function f (x, y) = ÔÌbe , Á Ans : b = ˜
ÔÓ 0 elsewhere Ë 1 - e -2 ¯
where ‘b’ is a positive constant, is a valid joint probability density function, find b.
32. The joint probability density function of two random variables X and Y is given by f (x, y) =
ÏC (2 x + y) 0 £ x £ 1, 0 £ y £ 2 .
Ì
Ó 0 elsewhere
Find (a) the value of C
Ê Ï 1 Ï1 ˆ
Á Ans ; C = 1 ; f ( x ) = ÔÌ x + 2 for 0 £ x £ 1 f ( y) = ÔÌ 4 (1 + y) for 0 £ y £ 2˜
Á 4 X ÔÓ
Y
ÔÓ ˜
Ë 0 otherwise 0 elsewhere ¯
(b) marginal distribution functions of X and Y.
33. The joint probability density function of two random variables X and Y is given by
    f(x, y) = a(2x + y) for 0 ≤ x ≤ 2, 2 ≤ y ≤ 4, and 0 elsewhere.
    Find (a) the value of a, and (b) P(X ≤ 1, Y > 3).
34. Discrete random variables X and Y have the joint distribution function
    F_X,Y(x, y) = 0.1u(x + 4)u(y − 1) + 0.15u(x + 3)u(y + 5) + 0.17u(x + 1)u(y − 3)
                + 0.05u(x)u(y − 1) + 0.18u(x − 2)u(y + 2) + 0.23u(x − 3)u(y − 4)
                + 0.12u(x − 4)u(y + 3).
    (a) Sketch F_X,Y(x, y).
    (b) Find the marginal distribution functions of X and Y.
    (c) Find P(−1 < X ≤ 4, −3 < Y ≤ 3).
    (d) Find P(X < 1, Y ≤ 2).
35. Given the function f(x, y) = (x² + y²)/8π for x² + y² < b, and 0 elsewhere,
    (a) find the constant b so that this is a valid joint density function, and
    (b) find P(0.5b < X² + Y² < 0.8b). (Ans. b = 4; 0.39)
36. A random vector (X, Y) has the pdf f_X,Y(x, y) = k in the region shown in Fig. 4.40.
    Fig. 4.40
    Find k, f_X(x) and f_Y(y). (Ans. k = 2; f_X(x) = 2(1 − x), 0 < x < 1; f_Y(y) = 2(1 − y), 0 < y < 1)
37. In a communication system, two signals (X, Y) are transmitted using the relation
    X = r cos(2πθ/6), Y = r sin(2πθ/6), where θ ∈ {0, 1, 2, …, 5}.
    (a) Find the joint pmf of X and Y.
    (b) Find the marginal pmf of X and of Y.
Multiple-Choice Questions
1. If f_X,Y(x, y) = (1/8)(6 − x − y) for 0 < x < 2, 2 < y < 4, and 0 elsewhere,
   what is the value of P(X + Y < 3)?
   (a) 5/12   (b) 5/24   (c) 5/6   (d) 5/36
2. The joint pdf of (X, Y) is f_X,Y(x, y) = 1 − (x − y) for 0 < x < 2, 0 < y < 2, and 0 elsewhere.
   (a) 1/4   (b) 1/2   (c) 1/8   (d) 1/16
3. The joint pdf of (X, Y) is f_X,Y(x, y) = bxy for 0 ≤ x ≤ 0.5 and 0 ≤ y ≤ 2, and 0 elsewhere.
   The value of b for a valid pdf is
   (a) 12   (b) 1   (c) 9   (d) 4
4. P{x1 < X ≤ x2, y1 < Y ≤ y2} =
   (a) F(x2, y2) + F(x1, y1) − F(x1, y2) − F(x2, y1)
   (b) F(x2, y2) − F(x1, y1) + F(x1, y2) − F(x2, y1)

   (a) f_X(x) = 2x for 0 ≤ x ≤ 2, 0 elsewhere      (b) f_X(x) = 2x for 0 < x ≤ 1, 0 elsewhere
   (c) f_X(x) = x for 0 < x ≤ 2, 0 elsewhere       (d) f_X(x) = x/2 for 0 ≤ x ≤ 4, 0 elsewhere
7. The joint density function of X and Y is given by
   f_X,Y(x, y) = 2e^(−x)e^(−2y) for 0 < x < ∞, 0 < y < ∞, and 0 otherwise.
   The value of P(X < Y) is
   (a) 2/3   (b) 1/9   (c) 1/6   (d) 1/3
8. The random variables X, Y and Z are uniformly distributed over (0, 1). The value of P(X ≥ YZ) is
   (a) 1/4   (b) 3/4   (c) 34/4   (d) 5/4
9. X and Y have the joint pdf given by

          Y      0      1      2
   X
   1            0.2    0.2     0

   (a) (2/π)√(1 − x²), 0 ≤ x ≤ 1      (b) (2/π²)√(1 − x²), 0 ≤ x ≤ 2
   (c) (2/π)√(x² − 1), 0 ≤ x ≤ 1      (d) (2/π)(1 + x²), 0 ≤ x ≤ 1
11. If f_X,Y(x, y) = 2e^(−(x+y)) for x ≥ 0, y ≥ x, and 0 elsewhere,
    the value of P(Y < 2X) is
    (a) 1/4   (b) 1/2   (c) 1/3   (d) 1/5
12. If the joint pdf of (X, Y) is f_X,Y(x, y) = (1/24)(x + y) for 0 < x < 3, x < y < x + 2, and 0 elsewhere,
    the value of P(Y > 2) is
    (a) 5/6   (b) 1/9   (c) 7/18   (d) 5/9
13. If f_X(x) and f_Y(y) are the pdfs of two independent random variables X and Y, then the pdf of Z = X + Y is
    (a) f_Z(z) = ∫ f_Y(y) f_X(z − y) dy      (b) f_Z(z) = ∫ f_X(x) f_Y(z − x) dx
    (c) f_Z(z) = ∫ f_Y(y) f_X(z − y) dy      (d) f_Z(z) = ∫ f_X(x) f_Y(x − y) dx
    (all integrals over (−∞, ∞))
14. The pdfs of X and Y are shown in Fig. 4.41.
    Fig. 4.41
    The pdf of Z = X + Y is
    (a)   (b)   (c)   (d)

    (c) ∫ f_Y(x) f_X(x) dx      (d) ∫ f_Y(y) f_Y(y) dy
INTRODUCTION 5.1
In Chapter 3, we studied the computation of expectation and variance for a single random variable. In Chapter
4, we studied the joint occurrence of two random variables and their distribution and density functions. Now
we extend the concepts of expectation and variance to two random variables. We will also study concepts like
covariance, correlation and the correlation coefficient, and their significance.
EXPECTATION OF A FUNCTION OF RANDOM VARIABLES 5.2
Let us consider two random variables X and Y with joint density function f_X,Y(x, y). Consider g(X, Y), which is
some function of the random variables X and Y. Then the expected value of g(X, Y) is given by

    E[g(X, Y)] = ∫∫ g(x, y) f_X,Y(x, y) dx dy                                    (5.1)

with both integrals over (−∞, ∞). For discrete random variables, the equivalent expression in terms of the joint pmf is

    E[g(X, Y)] = Σ_m Σ_n g(x_m, y_n) p_X,Y(x_m, y_n)                             (5.2)

If g(X1, X2, …, XN) is some function of N random variables X1, X2, …, XN, then the expected value of the
function is equal to

    E[g(X1, X2, …, XN)] = ∫…∫ g(x1, x2, …, xN) f_X1,X2,…,XN(x1, x2, …, xN) dx1 dx2 … dxN    (5.3)

Consider a linear function of two random variables X and Y given by g(X, Y) = aX + bY, where a and b are
constants. The expectation of g(X, Y) is

    E[g(X, Y)] = E[aX + bY]
               = ∫∫ (ax + by) f_X,Y(x, y) dx dy
               = a ∫∫ x f_X,Y(x, y) dx dy + b ∫∫ y f_X,Y(x, y) dx dy
               = a ∫ x [∫ f_X,Y(x, y) dy] dx + b ∫ y [∫ f_X,Y(x, y) dx] dy
               = a ∫ x f_X(x) dx + b ∫ y f_Y(y) dy
               = aE[X] + bE[Y]                                                   (5.4)
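Equation (5.4) is easy to sanity-check numerically. The short sketch below (Python with numpy; the
library and the two sample distributions are illustrative assumptions, not part of the text) estimates
both sides of E[aX + bY] = aE[X] + bE[Y] from simulated data.

# Monte Carlo check of Eq. (5.4): E[aX + bY] = aE[X] + bE[Y].
# The exponential/uniform choices for X and Y are assumed test distributions.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.exponential(scale=2.0, size=n)    # E[X] = 2
y = rng.uniform(0.0, 4.0, size=n)         # E[Y] = 2
a, b = 3.0, -1.5

lhs = np.mean(a * x + b * y)              # direct estimate of E[aX + bY]
rhs = a * np.mean(x) + b * np.mean(y)     # aE[X] + bE[Y]
print(lhs, rhs)                           # agree to Monte Carlo accuracy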
Solved Problems

5.1 X takes the values −1, 0 and 1 with probabilities 1/6, 1/3 and 1/2 respectively, and Y independently
takes the values −1, 0 and 1 with probabilities 1/4, 1/2 and 1/4 respectively. Find the joint pmf of (X, Y)
and hence E[X], E[Y], E[X²], E[Y²], E[XY], E[2X + 3Y] and E[X² − Y²].

Solution Since X and Y are independent, p_X,Y(x, y) = p_X(x) p_Y(y):

    P(X = −1, Y = −1) = (1/6)(1/4) = 1/24      P(X = −1, Y = 0) = (1/6)(1/2) = 1/12
    P(X = −1, Y = 1)  = (1/6)(1/4) = 1/24      P(X = 0, Y = −1) = (1/3)(1/4) = 1/12
    P(X = 0, Y = 0)   = (1/3)(1/2) = 1/6       P(X = 0, Y = 1)  = (1/3)(1/4) = 1/12
    P(X = 1, Y = −1)  = (1/2)(1/4) = 1/8       P(X = 1, Y = 0)  = (1/2)(1/2) = 1/4
    P(X = 1, Y = 1)   = (1/2)(1/4) = 1/8

The values are tabulated as shown below.

          Y     −1      0      1
   X
   −1          1/24   1/12   1/24
    0          1/12   1/6    1/12
    1          1/8    1/4    1/8

    E[X] = Σ xi p_X(xi) = (−1)(1/6) + 0(1/3) + 1(1/2) = 1/2 − 1/6 = 1/3
    E[Y] = Σ yj p_Y(yj) = (−1)(1/4) + 0(1/2) + 1(1/4) = 0
    E[X²] = Σ xi² p_X(xi) = (−1)²(1/6) + (0)²(1/3) + (1)²(1/2) = 2/3
    E[Y²] = Σ yj² p_Y(yj) = (−1)²(1/4) + (0)²(1/2) + (1)²(1/4) = 1/2
    E[XY] = ΣΣ xi yj p_X,Y(xi, yj)
          = (−1)(−1)(1/24) + (−1)(0)(1/12) + (−1)(1)(1/24) + 0 + 0 + 0
            + (1)(−1)(1/8) + (1)(0)(1/4) + (1)(1)(1/8)
          = 1/24 − 1/24 − 1/8 + 1/8 = 0
    E[2X + 3Y] = 2E[X] + 3E[Y] = 2(1/3) + 3(0) = 2/3
    E[X² − Y²] = E[X²] − E[Y²] = 2/3 − 1/2 = 1/6
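The table computations above can be reproduced mechanically. A minimal sketch (Python/numpy, added
here only as a cross-check; the pmf matrix is exactly the one tabulated above):

# Expectations from the joint pmf table of Solved Problem 5.1.
import numpy as np

xs = np.array([-1.0, 0.0, 1.0])
ys = np.array([-1.0, 0.0, 1.0])
p = np.array([[1/24, 1/12, 1/24],      # p[i, j] = P(X = xs[i], Y = ys[j])
              [1/12, 1/6,  1/12],
              [1/8,  1/4,  1/8 ]])

EX  = (xs[:, None] * p).sum()          # 1/3
EY  = (ys[None, :] * p).sum()          # 0
EXY = (np.outer(xs, ys) * p).sum()     # 0
EX2 = (xs[:, None]**2 * p).sum()       # 2/3
EY2 = (ys[None, :]**2 * p).sum()       # 1/2
print(EX, EY, EXY, 2*EX + 3*EY, EX2 - EY2)   # 0.333 0 0 0.667 0.167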
5.2 The joint density function of random variables X1, X2, X3 and X4 is given by
    f_X1,X2,X3,X4(x1, x2, x3, x4) = 1/(abcd) for 0 < x1 < a, 0 < x2 < b, 0 < x3 < c, 0 < x4 < d,
    and 0 elsewhere. Find E[X1^n1 X2^n2 X3^n3 X4^n4].

Solution
Let g(X1, X2, X3, X4) = X1^n1 X2^n2 X3^n3 X4^n4. Then

    E[X1^n1 X2^n2 X3^n3 X4^n4] = ∫∫∫∫ x1^n1 x2^n2 x3^n3 x4^n4 f_X1,X2,X3,X4(x1, x2, x3, x4) dx1 dx2 dx3 dx4
      = (1/abcd) ∫0^d ∫0^c ∫0^b ∫0^a x1^n1 x2^n2 x3^n3 x4^n4 dx1 dx2 dx3 dx4
      = a^(n1+1)/(abcd (n1+1)) ∫0^d ∫0^c ∫0^b x2^n2 x3^n3 x4^n4 dx2 dx3 dx4
      = a^(n1+1) b^(n2+1)/(abcd (n1+1)(n2+1)) ∫0^d ∫0^c x3^n3 x4^n4 dx3 dx4
      = a^(n1+1) b^(n2+1) c^(n3+1)/(abcd (n1+1)(n2+1)(n3+1)) ∫0^d x4^n4 dx4
      = a^(n1+1) b^(n2+1) c^(n3+1) d^(n4+1)/(abcd (n1+1)(n2+1)(n3+1)(n4+1))
      = a^n1 b^n2 c^n3 d^n4 / ((n1+1)(n2+1)(n3+1)(n4+1))
5.3 Three statistically independent random variables X1, X2 and X3 have mean values mX1 = 3, mX2 = 6 and
mX3 = −2. Find the mean values of the following functions:
(a) g(X1, X2, X3) = X1 + 3X2 + 4X3
(b) g(X1, X2, X3) = X1X2X3
(c) g(X1, X2, X3) = −2X1X2 − 3X1X3 + 4X2X3
(d) g(X1, X2, X3) = X1 + X2 + X3
Practice Problems

5.1 If X and Y are independent random variables with pdfs f_X(x) = 8/x³, x > 2, and f_Y(y) = 2y, 0 < y < 1,
find E[XY]. (Ans. 8/3)
5.2 Two statistically independent random variables X1 and X2 have mean values mX1 = 5 and mX2 = 10. Find the mean
values of the following functions:
(a) g(X1, X2) = X1 + 3X2   (b) g(X1, X2) = −X1X2 + 3X1 + X2   (Ans. (a) 35 (b) −25)
Solved Problems

5.4 Prove that the mean value of a weighted sum of random variables is equal to the weighted sum of the
mean values.

Solution Consider N random variables X1, X2, …, XN.
Let g(X1, X2, …, XN) = a1X1 + a2X2 + … + aNXN. Then

    E[g(X1, X2, …, XN)] = E[a1X1 + a2X2 + … + aNXN] = E[Σ_{i=1}^{N} ai Xi]
      = ∫…∫ Σ_{i=1}^{N} ai xi f_X1,…,XN(x1, …, xN) dx1 … dxN
      = Σ_{i=1}^{N} ai ∫…∫ xi f_X1,…,XN(x1, …, xN) dx1 … dxN
      = a1 ∫ x1 f_X1(x1) dx1 + a2 ∫ x2 f_X2(x2) dx2 + … + aN ∫ xN f_XN(xN) dxN
      = a1E[X1] + a2E[X2] + … + aNE[XN]
REVIEW QUESTIONS

1. Define the expectation of a function of a random variable.
2. Prove that the mean value of a weighted sum of random variables is equal to the weighted sum of the
   mean values.
JOINT MOMENTS ABOUT THE ORIGIN 5.3
The (n + k)th-order joint moments of two random variables X and Y about the origin are defined as

    m_nk = E[X^n Y^k] = ∫∫ x^n y^k f_X,Y(x, y) dx dy                             (5.5)

with both integrals over (−∞, ∞). If n = 1 and k = 0, m_nk is equal to the mean value of the random variable X, and if n = 0 and k = 1, m_nk
is equal to the mean value of the random variable Y. In both cases, n + k = 1; thus there are two first-order
moments. The total number of second-order moments is three. They are
(i) m20 = E[X²], the mean-square value of X
(ii) m02 = E[Y²], the mean-square value of Y
(iii) m11 = E[XY], the correlation between X and Y.
Also, m_n0 = E[X^n], the nth-order moment of X, and m_0k = E[Y^k], the kth-order moment of Y.
5.3.1 Correlation
The correlation between two random variables X and Y is defined as

    R_XY = E[XY] = ∫∫ xy f_X,Y(x, y) dx dy                                       (5.6)

If the correlation between two random variables X and Y is zero, then they are said to be orthogonal. If
E[XY] = E[X]E[Y], then the random variables are uncorrelated. Random variables that are statistically
independent are uncorrelated, but not all uncorrelated random variables are statistically independent.
However, uncorrelated Gaussian random variables are also independent.
For N random variables X1, X2, …, XN, the (n1 + n2 + … + nN)th-order joint moments are defined by

    m_n1n2…nN = E[X1^n1 X2^n2 … XN^nN]
              = ∫…∫ x1^n1 x2^n2 … xN^nN f_X1,…,XN(x1, …, xN) dx1 … dxN          (5.7)

5.3.2 Joint Central Moments
The (n + k)th-order joint central moments of X and Y are defined by

    μ_nk = E[(X − mX)^n (Y − mY)^k] = ∫∫ (x − mX)^n (y − mY)^k f_X,Y(x, y) dx dy  (5.8)

If n = 2 and k = 0, then

    μ20 = E[(X − mX)²] = σX²                                                     (5.9)

is the variance of X. Similarly, if n = 0 and k = 2, then

    μ02 = E[(Y − mY)²] = σY²                                                     (5.10)

is the variance of Y. Since n + k = 2 for the above cases, μ20 and μ02 are known as second-order central
moments. The other second-order joint central moment, μ11, is given by

    μ11 = E[(X − mX)(Y − mY)] = ∫∫ (x − mX)(y − mY) f_X,Y(x, y) dx dy            (5.11)
5.3.3 Covariance
Consider two random variables X and Y with mean values mX and mY respectively. Then the second-order central
moment defined in Eq. (5.11), which is also known as the covariance, is given by

    Cov(X, Y) = σXY = E[(X − mX)(Y − mY)]                                        (5.12)

Expanding the right side of the equation yields

    Cov(X, Y) = E[XY − mX Y − X mY + mX mY]
              = E[XY] − mX E[Y] − mY E[X] + mX mY
              = E[XY] − mX mY − mX mY + mX mY
              = E[XY] − mX mY                                                    (5.13)

If X and Y are independent random variables, then

    Cov(X, Y) = E[X]E[Y] − mX mY = 0                                             (5.14)
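The shortcut Cov(X, Y) = E[XY] − mX mY of Eq. (5.13) can be verified against the defining form on
simulated data. A sketch (Python/numpy; the joint distribution of X and Y below is an assumed example
constructed so that Cov(X, Y) = 1.5 Var(X) = 6):

# Two equivalent covariance computations, Eq. (5.12) vs Eq. (5.13).
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
x = rng.normal(0.0, 2.0, size=n)
y = 1.5 * x + rng.normal(0.0, 1.0, size=n)

cov_central  = np.mean((x - x.mean()) * (y - y.mean()))   # E[(X - mX)(Y - mY)]
cov_shortcut = np.mean(x * y) - x.mean() * y.mean()       # E[XY] - mX mY
print(cov_central, cov_shortcut)                          # both close to 6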
For any real number a, Var(aX − Y) ≥ 0. Expanding and completing the square in a gives

    σY² − {Cov(X, Y)}²/σX² + σX² (a − Cov(X, Y)/σX²)² ≥ 0

i.e.

    [σX²σY² − {Cov(X, Y)}²]/σX² + σX² (a − Cov(X, Y)/σX²)² ≥ 0

In the above sum, the second term is always non-negative. Therefore, to satisfy the above inequality,

    σX²σY² − {Cov(X, Y)}² ≥ 0
    σX²σY² ≥ {Cov(X, Y)}²
or  |Cov(X, Y)| ≤ σX σY
Solved Problems

5.5 Show that (a) Cov(aX, bY) = ab Cov(X, Y), and (b) Var(aX + bY) = a²Var(X) + b²Var(Y) + 2ab Cov(X, Y).

Solution
(a) Cov(aX, bY) = E{[aX − E(aX)][bY − E(bY)]}
                = E{[aX − aE(X)][bY − bE(Y)]}
                = ab E{[X − E(X)][Y − E(Y)]}
                = ab Cov(X, Y)
(b) Var(aX + bY) = E[(aX + bY)²] − {E[aX + bY]}²
                 = E[a²X² + b²Y² + 2abXY] − {aE[X] + bE[Y]}²
                 = a²E[X²] + b²E[Y²] + 2abE[XY] − {a²E²[X] + b²E²[Y] + 2abE[X]E[Y]}
                 = a²{E[X²] − (E[X])²} + b²{E[Y²] − (E[Y])²} + 2ab{E[XY] − E[X]E[Y]}
                 = a²Var[X] + b²Var[Y] + 2ab Cov(X, Y)
5.6 Show that for n random variables X1, X2, …, Xn,
    Var[Σ_{k=1}^{n} Xk] = Σ_{k=1}^{n} Var[Xk] + 2 Σ_{j<k} Cov(Xj, Xk).

Solution We know Var[X] = E[X²] − {E[X]}². So

    Var[Σ_{k=1}^{n} Xk] = E[(Σ_{k=1}^{n} Xk)²] − {E[Σ_{k=1}^{n} Xk]}²
      = E[X1² + X2² + … + Xn² + 2X1X2 + 2X2X3 + … + 2XnX1]
        − {E²[X1] + E²[X2] + … + E²[Xn] + 2E[X1]E[X2] + … + 2E[Xn]E[X1]}
      = Var[X1] + Var[X2] + … + Var[Xn] + 2Cov(X1, X2) + 2Cov(X2, X3) + …
      = Σ_{k=1}^{n} Var[Xk] + 2 Σ_{j<k} Cov(Xj, Xk)
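The variance-of-a-sum identity just derived can also be confirmed numerically. A sketch (Python/numpy;
the correlated triple below is an assumed construction, chosen only so the covariance terms are nonzero):

# Var(X1 + X2 + X3) = sum of variances + 2 * sum of pairwise covariances.
import numpy as np

rng = np.random.default_rng(2)
n = 400_000
z = rng.normal(size=(3, n))
x1, x2, x3 = z[0], 0.5 * z[0] + z[1], 0.3 * z[1] + z[2]

s = x1 + x2 + x3
lhs = s.var()
rhs = (x1.var() + x2.var() + x3.var()
       + 2 * (np.cov(x1, x2)[0, 1] + np.cov(x1, x3)[0, 1] + np.cov(x2, x3)[0, 1]))
print(lhs, rhs)    # equal up to sampling error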
Practice Problem

5.3 If X and Y are two random variables, find (i) E[(X + Y)²] and (ii) Var(X + Y). (iii) Under what condition is the
variance of the sum equal to the sum of the individual variances? (Ans. (iii) Cov(X, Y) = 0)

REVIEW QUESTIONS

3. Define joint moments about the origin for two random variables.
4. Define the joint central moments for two-dimensional random variables X and Y.
5. Explain in detail the properties of covariance.
Solved Problems

5.7 Two random variables X and Y have the joint density function
    f_X,Y(x, y) = 1/24 for 0 < x < 6, 0 < y < 4, and 0 elsewhere.
What is the expected value of the function g(X, Y) = (XY)²?

Solution

    E[(XY)²] = ∫∫ (xy)² f_X,Y(x, y) dx dy
             = ∫0^4 ∫0^6 (x²y²/24) dx dy = (1/24) [x³/3]0^6 ∫0^4 y² dy
             = 3 ∫0^4 y² dy = 3 [y³/3]0^4 = 64
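The double integral in Solved Problem 5.7 can be checked directly with numerical quadrature. A sketch
(Python with scipy, used here purely as a cross-check):

# E[(XY)^2] for f(x, y) = 1/24 on 0 < x < 6, 0 < y < 4; expected value 64.
from scipy.integrate import dblquad

val, err = dblquad(lambda y, x: (x * y)**2 / 24.0, 0, 6, 0, 4)
print(val)   # 64.0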
5.8 Find the covariance of the two random variables whose pdf is given by
    f_X,Y(x, y) = 2 for x > 0, y > 0, x + y < 1, and 0 otherwise.

Solution

    E[XY] = ∫0^1 ∫0^{1−y} 2xy dx dy = ∫0^1 y(1 − y)² dy = ∫0^1 (y − 2y² + y³) dy
          = [y²/2 − 2y³/3 + y⁴/4]0^1 = 1/2 − 2/3 + 1/4 = 1/12

    E[X] = m10 = ∫∫ x f_X,Y(x, y) dx dy = ∫0^1 ∫0^{1−x} 2x dy dx
         = 2 ∫0^1 x(1 − x) dx = 2 ∫0^1 (x − x²) dx = 2[x²/2 − x³/3]0^1 = 1/3

    E[Y] = m01 = ∫∫ y f_X,Y(x, y) dx dy = ∫0^1 ∫0^{1−y} 2y dx dy
         = ∫0^1 2y(1 − y) dy = 1/3

    Cov(X, Y) = E[XY] − E[X]E[Y] = 1/12 − 1/9 = −1/36
5.9 The joint pdf of X and Y is
    f_X,Y(x, y) = (x + y)²/40 for −3 < x < 3, −1 < y < 1, and 0 elsewhere.
Show that all the third-order moments of X and Y are zero.

Solution

    m30 = E[X³Y⁰] = E[X³] = ∫∫ x³ f_X,Y(x, y) dx dy
        = (1/40) ∫_{−3}^{3} ∫_{−1}^{1} x³(x² + y² + 2xy) dy dx
        = (1/40) ∫_{−3}^{3} (2x⁵ + (2/3)x³) dx = 0,

since the integrand is an odd function of x.

    m03 = E[X⁰Y³] = E[Y³] = (1/40) ∫_{−3}^{3} ∫_{−1}^{1} (y³x² + y⁵ + 2xy⁴) dy dx
        = (1/40) ∫_{−3}^{3} (4x/5) dx = 0

    m21 = E[X²Y] = (1/40) ∫_{−3}^{3} ∫_{−1}^{1} (x⁴y + x²y³ + 2x³y²) dy dx
        = (1/40) ∫_{−3}^{3} (4x³/3) dx = 0

    m12 = E[XY²] = (1/40) ∫_{−3}^{3} ∫_{−1}^{1} (x³y² + xy⁴ + 2x²y³) dy dx
        = (1/40) ∫_{−3}^{3} ((2/3)x³ + (2/5)x) dx = 0

All the third-order moments are zero.
5.10 Statistically independent random variables X and Y have moments m10 = 2, m20 = 14, m02 = 12 and
m11 = −6. Find the central moment μ22.

Solution We know m_nk = E[X^n Y^k]. Therefore,

    m20 = E[X²] = 14;  m10 = E[X] = 2;  m02 = E[Y²] = 12;  m11 = E[XY] = −6

Since X and Y are statistically independent, E[XY] = E[X]E[Y] = −6, and with E[X] = 2 we get E[Y] = −3.

    μ22 = E[(X − mX)²(Y − mY)²] = E[(X − mX)²] E[(Y − mY)²]    (by independence)
        = (m20 − m10²)(m02 − m01²) = (14 − 4)(12 − 9)

    ⇒ μ22 = 30
5.11 The joint pdf of X and Y is f_X,Y(x, y) = x(y + 1.5) for 0 < x < 1, 0 < y < 1, and 0 elsewhere.
Find the joint moments m_nk.

Solution

    m_nk = E[X^n Y^k] = ∫∫ x^n y^k f_X,Y(x, y) dx dy
         = ∫0^1 ∫0^1 x^{n+1} y^k (y + 1.5) dx dy
         = ∫0^1 ∫0^1 (x^{n+1} y^{k+1} + 1.5 x^{n+1} y^k) dx dy
         = (1/(n+2)) ∫0^1 (y^{k+1} + 1.5 y^k) dy
         = (1/(n+2)) [1/(k+2) + (3/2)/(k+1)]

In particular,

    m00 = E[X⁰Y⁰] = (1/2)[1/2 + 3/2] = 1
    m10 = E[X] = (1/3)[1/2 + 3/2] = 2/3
    m01 = E[Y] = (1/2)[1/3 + 3/4] = 13/24
    m11 = E[XY] = (1/3)[1/3 + 3/4] = 13/36
5.12 X and Y are two independent random variables such that E[X] = λ1, Var(X) = σ1², E[Y] = λ2 and
Var(Y) = σ2². Prove that Var[XY] = σ1²σ2² + λ1²σ2² + λ2²σ1².

5.13 Statistically independent random variables X and Y have moments m10 = 2, m20 = 16, m02 = 30 and
m11 = −10. Find the central moment μ22.

Solution

    m10 = E[X] = 2;  m11 = E[XY] = −10
    E[XY] = E[X]E[Y] = 2E[Y] = −10  ⇒  E[Y] = m01 = −5
    μ20 = σX² = m20 − m10² = 16 − 4 = 12
    μ02 = σY² = m02 − m01² = 30 − 25 = 5
    μ22 = E[(X − mX)²(Y − mY)²] = μ20 μ02 = 12(5) = 60
5.14 Show that two random variables X1 and X2 with joint pdf
    f_X1,X2(x1, x2) = 1/16 for |x1| < 4, 2 < x2 < 4, and 0 elsewhere,
are independent and orthogonal.

Solution The marginal density function of X1 is

    f_X1(x1) = ∫2^4 (1/16) dx2 = 1/8,  |x1| < 4

The marginal density function of X2 is

    f_X2(x2) = ∫_{−4}^{4} (1/16) dx1 = (1/16)(8) = 1/2,  2 < x2 < 4

To check independence, find the product of f_X1(x1) and f_X2(x2) and compare it with f_X1,X2(x1, x2).
Since f_X1,X2(x1, x2) = (1/8)(1/2) = 1/16 = f_X1(x1) f_X2(x2), the two random variables are statistically independent.
For orthogonality, we require E[X1X2] = 0:

    E[X1X2] = ∫∫ x1 x2 f_X1,X2(x1, x2) dx1 dx2 = (1/16) ∫2^4 x2 [∫_{−4}^{4} x1 dx1] dx2 = 0

since the inner integral of the odd function x1 vanishes. Hence X1 and X2 are orthogonal.
5.15 The joint pdf of X and Y is f_X,Y(x, y) = 8e^(−4y) for 0 < x < 0.5, y > 0, and 0 otherwise.
Find (a) the marginal pdfs of X and Y, and (b) Cov(X, Y).

Solution
(a) The marginal pdf of X is

    f_X(x) = ∫ f_X,Y(x, y) dy = ∫0^∞ 8e^(−4y) dy = 8[e^(−4y)/(−4)]0^∞ = 2,  0 < x < 0.5

The marginal pdf of Y is

    f_Y(y) = ∫0^{0.5} 8e^(−4y) dx = 8e^(−4y)(0.5) = 4e^(−4y),  y > 0

(b) Cov(X, Y) = E[XY] − E[X]E[Y]

    E[X] = ∫0^{0.5} x(2) dx = [x²]0^{0.5} = 0.25
    E[Y] = ∫0^∞ y(4e^(−4y)) dy = 4(1/16) = 0.25
    E[XY] = ∫0^∞ ∫0^{0.5} xy(8e^(−4y)) dx dy = [∫0^{0.5} 2x dx][∫0^∞ 4y e^(−4y) dy] = (0.25)(0.25) = 1/16

Since f_X,Y(x, y) = f_X(x) f_Y(y), X and Y are independent, and Cov(X, Y) = 1/16 − (0.25)(0.25) = 0.
5.16 The joint pdf of two continuous random variables X and Y is given by
    f_X,Y(x, y) = e^(−(x+y)) for 0 ≤ x < ∞, 0 ≤ y < ∞, and 0 otherwise.
Are X and Y independent?

Solution The marginal pdf of X is given by

    f_X(x) = ∫ f_X,Y(x, y) dy = ∫0^∞ e^(−(x+y)) dy = e^(−x) ∫0^∞ e^(−y) dy
           = e^(−x)[−e^(−y)]0^∞ = e^(−x),  x ≥ 0

Similarly, the marginal pdf of Y is

    f_Y(y) = ∫0^∞ e^(−(x+y)) dx = e^(−y) ∫0^∞ e^(−x) dx = e^(−y),  y ≥ 0

Since f_X,Y(x, y) = e^(−x) e^(−y) = f_X(x) f_Y(y), X and Y are independent.
5.17 The joint pdf of X and Y is f_X,Y(x, y) = x + y for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, and 0 elsewhere.
Find E[XY], E[X] and E[Y].

Solution

    E[XY] = ∫0^1 ∫0^1 xy(x + y) dx dy = ∫0^1 (y/3 + y²/2) dy = (1/3)(1/2) + (1/2)(1/3) = 1/3

    f_X(x) = ∫0^1 (x + y) dy = x + 1/2
    E[X] = ∫0^1 x(x + 1/2) dx = 1/3 + 1/4 = 7/12

    f_Y(y) = ∫0^1 (x + y) dx = y + 1/2
    E[Y] = ∫0^1 y(y + 1/2) dy = 1/3 + 1/4 = 7/12
5.18 X and Y are independent random variables with joint pdf f_X,Y(x, y) = 4e^(−2(x+y)) for x > 0, y > 0.
Find (a) E[X + Y], (b) E[X² + Y²], and (c) E[XY].

Solution
(a) E[X + Y] = ∫0^∞ ∫0^∞ (x + y) 4e^(−2x) e^(−2y) dx dy
             = 4 ∫0^∞ [∫0^∞ x e^(−2x) dx + y ∫0^∞ e^(−2x) dx] e^(−2y) dy
             = 4 ∫0^∞ (1/4 + y/2) e^(−2y) dy = 4[(1/4)(1/2) + (1/2)(1/4)] = 1

(b) E[X² + Y²] = E[X²] + E[Y²]

    E[X²] = ∫ x² f_X(x) dx = 2 ∫0^∞ x² e^(−2x) dx = 2(1/4) = 1/2

Similarly, E[Y²] = 1/2, so

    E[X² + Y²] = 1/2 + 1/2 = 1

(c) Since X and Y are independent, E[XY] = E[X]E[Y].

    E[X] = ∫ x f_X(x) dx = 2 ∫0^∞ x e^(−2x) dx = 2(1/4) = 1/2

Similarly, E[Y] = 1/2, so

    E[XY] = (1/2)(1/2) = 1/4
5.19 Two random variables X and Y have the following joint probability density function:
    f_X,Y(x, y) = 2 − x − y for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, and 0 elsewhere.
Find Var(X) and Var(Y).

Solution

    f_X(x) = ∫0^1 (2 − x − y) dy = (2 − x) − 1/2 = 3/2 − x,  0 ≤ x ≤ 1
    f_Y(y) = ∫0^1 (2 − x − y) dx = 3/2 − y,  0 ≤ y ≤ 1

    E[X²] = ∫0^1 x²(3/2 − x) dx = [x³/2 − x⁴/4]0^1 = 1/2 − 1/4 = 1/4
    E[X] = ∫0^1 x(3/2 − x) dx = [3x²/4 − x³/3]0^1 = 3/4 − 1/3 = 5/12

    σX² = E[X²] − {E[X]}² = 1/4 − (5/12)² = 11/144

Similarly, σY² = 11/144.
TRANSFORMATIONS OF MULTIPLE RANDOM VARIABLES 5.4
Consider a random variable Z = g(X, Y), which is a function of two random variables X and Y. The expectation
of g(X, Y) can be obtained using Eq. (5.1):

    Z̄ = E[g(X, Y)] = ∫∫ g(x, y) f_X,Y(x, y) dx dy                               (5.23)

where f_X,Y(x, y) is the joint pdf of X and Y. In the above equation, we used the joint pdf of X and Y to find
the expectation of g(X, Y); that is, it is not necessary to find the density function of the new random
variable Z. However, in some practical problems it may be required to determine the density function of the
transformed variable Z, given the joint pdf of X and Y. In this section, we study the method of finding the
density function for a single functional transformation of more than one random variable. Subsequently, we
also find the pdf of two functions of two random variables.

5.4.1 One Function of Several Random Variables
For Y = g(X1, X2, …, XN), the distribution function of Y is

    F_Y(y) = P(g(X1, X2, …, XN) ≤ y)
           = ∫…∫_{g(x1, …, xN) ≤ y} f_X1,…,XN(x1, x2, …, xN) dx1 dx2 … dxN       (5.24)

The pdf is f_Y(y) = dF_Y(y)/dy.

Let us consider one function of two random variables given by Z = g(X, Y). Then

    F_Z(z) = P(Z ≤ z) = P(g(X, Y) ≤ z) = P{(X, Y) ∈ Rz}
           = ∫∫_{(x, y) ∈ Rz} f_X,Y(x, y) dx dy                                  (5.25)

where Rz represents the region in the xy plane where the inequality g(x, y) ≤ z is satisfied, and

    f_Z(z) = dF_Z(z)/dz

Fig. 5.2 Rz in the xy plane
Solved Problems

5.20 Consider the random variables X and Y with pdfs f_X(x) and f_Y(y). Determine the pdf f_Z(z) if (a) Z
= X + Y, (b) Z = X − Y, (c) Z = X² + Y², (d) Z = max(X, Y), (e) Z = min(X, Y), and (f) Z = X/Y.

Solution
(a) Given Z = X + Y. We know

    F_Z(z) = P(Z ≤ z) = P(X + Y ≤ z) = ∫_{−∞}^{∞} ∫_{−∞}^{z−y} f_X,Y(x, y) dx dy

Differentiating with respect to z,

    f_Z(z) = dF_Z(z)/dz = ∫_{−∞}^{∞} f_X,Y(z − y, y) dy                          (5.26)

(b) Given Z = X − Y. Then

    F_Z(z) = P(X − Y ≤ z) = ∫_{−∞}^{∞} ∫_{−∞}^{y+z} f_X,Y(x, y) dx dy

Differentiating,

    f_Z(z) = dF_Z(z)/dz = ∫_{−∞}^{∞} f_X,Y(y + z, y) dy                          (5.29)

Fig. 5.4 The regions x − y ≤ z for z > 0 and z < 0
(c) Z = X² + Y²
The inequality x² + y² ≤ z defines a disc of radius √z, so

    F_Z(z) = P(X² + Y² ≤ z) = ∫∫_{x² + y² ≤ z} f_X,Y(x, y) dx dy

The area of the shaded region (Fig. 5.5) can be obtained by integrating over the horizontal strip along the
x-axis from −√(z − y²) to √(z − y²), and then sliding the strip along the y-axis from −√z to √z:

    F_Z(z) = ∫_{−√z}^{√z} ∫_{−√(z−y²)}^{√(z−y²)} f_X,Y(x, y) dx dy

    ⇒ f_Z(z) = ∫_{−√z}^{√z} [f_X,Y(√(z − y²), y) + f_X,Y(−√(z − y²), y)] / (2√(z − y²)) dy    (5.34)
(d) Z = max(X, Y)
If X > Y, max(X, Y) is equal to X, and if X ≤ Y, then max(X, Y) is equal to Y. Therefore, we can write
    Z = max(X, Y) = X for X > Y
                  = Y for X ≤ Y
The distribution function of Z is given by

    F_Z(z) = P(max(X, Y) ≤ z) = P(X ≤ z, X > Y) + P(Y ≤ z, X ≤ Y)

Figures 5.6 and 5.7 show the regions satisfying the inequalities in P(X ≤ z, X > Y) and P(Y ≤ z, X ≤ Y).
The combined region can be obtained by drawing the two straight lines x = y and y = z; their intersection
is at the point (z, z). The total region is shown in Fig. 5.8, from which we get

    F_Z(z) = P(X ≤ z, Y ≤ z) = F_X,Y(z, z)

If X and Y are independent,

    F_Z(z) = F_X(z) F_Y(z)
    f_Z(z) = dF_Z(z)/dz = F_X(z) f_Y(z) + F_Y(z) f_X(z)                          (5.35)

Fig. 5.8
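Equation (5.35) lends itself to a quick simulation check. A sketch (Python/numpy; taking X and Y to be
independent unit exponentials is an assumption made only for the test, so F(z) = 1 − e^(−z)):

# For Z = max(X, Y) with independent X, Y: F_Z(z) = F_X(z) F_Y(z).
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
x = rng.exponential(size=n)
y = rng.exponential(size=n)
z = np.maximum(x, y)

for t in (0.5, 1.0, 2.0):
    emp  = np.mean(z <= t)              # empirical F_Z(t)
    theo = (1 - np.exp(-t))**2          # F_X(t) F_Y(t)
    print(t, emp, theo)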
(e) Z = min(X, Y)
If x ≤ y then min(x, y) = x, and if x > y, min(x, y) = y. That is,
    Z = Y for X > Y
      = X for X ≤ Y
    F_Z(z) = P(min(X, Y) ≤ z)
           = P(Y ≤ z, X > Y) + P(X ≤ z, X ≤ Y)                                   (5.36)
The regions satisfying the inequalities in P(Y ≤ z, X > Y) and P(X ≤ z, X ≤ Y) are shown in Figs 5.9 and 5.10.

Figs 5.9–5.11 (regions bounded by x = y, y = z and x = z; for the ratio Z = X/Y in part (f), the boundary is the line x = yz)
5.21 Find F_W,Z(w, z) in terms of F_X,Y(x, y), where W = max(X, Y) and Z = min(X, Y).

Solution We know

    F_W,Z(w, z) = P(W ≤ w, Z ≤ z) = P{max(X, Y) ≤ w, min(X, Y) ≤ z}

From Fig. 5.14, we can see that W ≤ w if and only if (X, Y) lies in the south-west region below (w, w). Similarly, Z ≤ z
if and only if (X, Y) lies in the region shown in Fig. 5.15. Therefore, W ≤ w and Z ≤ z if and only if (X, Y) lies
in the intersection of the two regions. If w < z, the intersection is the whole south-west region below (w, w), so

    P(W ≤ w, Z ≤ z) = P(W ≤ w) = F_X,Y(w, w) for w < z

If w ≥ z,

    P(W ≤ w, Z ≤ z) = F_X,Y(w, w) − P(z < X ≤ w, z < Y ≤ w)
                    = F_X,Y(w, w) − [F_X,Y(w, w) − F_X,Y(w, z) − F_X,Y(z, w) + F_X,Y(z, z)]
                    = F_X,Y(z, w) + F_X,Y(w, z) − F_X,Y(z, z)                    (5.43)

Fig. 5.14 (w ≥ z)   Fig. 5.15 (w < z)
5.22 Consider two random variables X and Y with pdfs f_X(x) = e^(−x) u(x) and f_Y(y) = e^(−y) u(y). If X and Y
are independent and identically distributed, find the pdf of (a) Z = X + Y, (b) Z = X − Y, (c) Z = max(X, Y),
and (d) Z = min(X, Y).

Solution Given: f_X(x) = e^(−x) u(x); f_Y(y) = e^(−y) u(y). Also, f_X,Y(x, y) = e^(−(x+y)) u(x) u(y).
(a) Z = X + Y
From Eq. (5.26), we have

    f_Z(z) = ∫ f_X,Y(z − y, y) dy

Since f_X(x) = 0 for x < 0 and f_Y(y) = 0 for y < 0,

    f_Z(z) = ∫0^z f_X,Y(z − y, y) dy = ∫0^z e^(−(z−y+y)) dy = e^(−z) ∫0^z dy = z e^(−z),  z > 0

(b) Z = X − Y
From Eqs (5.32) and (5.33),

    f_Z(z) = ∫0^∞ f_X,Y(z + y, y) dy for z ≥ 0;   f_Z(z) = ∫_{−z}^{∞} f_X,Y(z + y, y) dy for z < 0

For z ≥ 0:

    f_Z(z) = ∫0^∞ e^(−(z+2y)) dy = e^(−z) [e^(−2y)/(−2)]0^∞ = (1/2) e^(−z)

For z < 0:

    f_Z(z) = ∫_{−z}^{∞} e^(−(z+2y)) dy = e^(−z) (e^(2z)/2) = (1/2) e^(z)
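Both densities derived in parts (a) and (b) can be compared against histograms of simulated sums and
differences. A sketch (Python/numpy, added for illustration only):

# For iid unit exponentials: X + Y has pdf z e^(-z), z > 0; X - Y has pdf e^(-|z|)/2.
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
x = rng.exponential(size=n)
y = rng.exponential(size=n)

for z, pdf in ((x + y, lambda t: t * np.exp(-t)),
               (x - y, lambda t: 0.5 * np.exp(-np.abs(t)))):
    hist, edges = np.histogram(z, bins=60, density=True)
    mids = 0.5 * (edges[:-1] + edges[1:])
    print(np.max(np.abs(hist - pdf(mids))))   # small in both cases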
5.23 If X and Y are independent identically distributed random variables with X ~ N(0, σ²) and Y ~
N(0, σ²), find the pdf of (a) Z = √(X² + Y²), (b) Z = X² + Y².

Solution

    f_X(x) = (1/(√(2π) σ)) e^(−x²/2σ²);  f_Y(y) = (1/(√(2π) σ)) e^(−y²/2σ²)
    f_X,Y(x, y) = f_X(x) f_Y(y) = (1/2πσ²) e^(−(x²+y²)/2σ²)

(a) Z = √(X² + Y²), i.e. X² + Y² = Z². Proceeding as in Eq. (5.34) with the disc of radius z,

    f_Z(z) = (1/2πσ²) e^(−z²/2σ²) · 2z ∫_{−z}^{z} dy/√(z² − y²)

Let y = z sin θ, dy = z cos θ dθ; the integral becomes ∫_{−π/2}^{π/2} dθ = π, so

    f_Z(z) = (2z e^(−z²/2σ²)/2πσ²) π = (z/σ²) e^(−z²/2σ²) for z ≥ 0
    ⇒ f_Z(z) = (z/σ²) e^(−z²/2σ²) u(z)

(b) Z = X² + Y². Using Eq. (5.34) and the same substitution,

    f_Z(z) = (e^(−z/2σ²)/2πσ²) ∫_{−√z}^{√z} dy/√(z − y²) = (e^(−z/2σ²)/2πσ²) π = e^(−z/2σ²)/(2σ²) for z ≥ 0
    ⇒ f_Z(z) = (e^(−z/2σ²)/(2σ²)) u(z)
5.24 Given
    f_X,Y(x, y) = x + y for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, and 0 otherwise,
find the density of Z = X + Y.

Solution From Fig. 5.18, for 0 ≤ z ≤ 1 the region x + y ≤ z (Region I) can be covered by sliding the
horizontal strip along the y-axis from 0 to z, with x running from 0 to z − y. Therefore,

    F_Z(z) = ∫0^z ∫0^{z−y} (x + y) dx dy
           = ∫0^z [(z − y)²/2 + y(z − y)] dy
           = ∫0^z [(z² + y² − 2yz + 2yz − 2y²)/2] dy
           = ∫0^z [(z² − y²)/2] dy = z³/2 − z³/6 = z³/3

    f_Z(z) = dF_Z(z)/dz = d(z³/3)/dz = z²,  0 ≤ z ≤ 1

Fig. 5.18 (the lines x + y = z, x = 1, y = 1 and the strip x = z − y)
For 1 < z < 2, it is easier to compute the complementary area above the line x + y = z:

    F_Z(z) = 1 − ∫_{z−1}^{1} ∫_{z−y}^{1} (x + y) dx dy = 1 − (4/3 − z² + z³/3)
           = −1/3 + z² − z³/3

    f_Z(z) = dF_Z(z)/dz = 2z − z² = z(2 − z),  1 < z < 2

    f_Z(z) = z² for 0 ≤ z ≤ 1
           = z(2 − z) for 1 < z < 2
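The piecewise density just obtained can be verified by simulation. A sketch (Python/numpy; the rejection
sampler used to draw from f(x, y) = x + y is an assumed device, not part of the solution):

# Z = X + Y for f(x, y) = x + y on the unit square: f_Z(z) = z^2 on [0, 1], z(2 - z) on (1, 2].
import numpy as np

rng = np.random.default_rng(5)
n = 2_000_000
x = rng.uniform(size=n)
y = rng.uniform(size=n)
keep = rng.uniform(0, 2, size=n) < (x + y)    # accept with probability (x + y)/2
z = (x + y)[keep]

for t in (0.5, 1.5):
    emp  = np.mean(np.abs(z - t) < 0.01) / 0.02        # density estimate near t
    theo = t**2 if t <= 1 else t * (2 - t)
    print(t, emp, theo)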
Practice Problems

5.5 The joint pdf of X and Y is f_X,Y(x, y) = e^(−(x+y)), x > 0, y > 0. Find the pdf of U = X − Y.
    (Ans. f_U(u) = (1/2)e^(−|u|))
5.6 If X and Y are independent random variables with joint pdf f_X,Y(x, y), find the density function of Z = X/Y.
    (Ans. f_Z(z) = ∫ |y| f_X(yz) f_Y(y) dy)
5.4.2 Two Functions of Two Random Variables
Consider two random variables X and Y with joint pdf f_X,Y(x, y). Let us define two random variables Z and W
which arise as functions of X and Y: Z = g(X, Y) and W = h(X, Y). Let us assume that the functions g and h
satisfy the following conditions.
1. The equations z = g(x, y) and w = h(x, y) can be uniquely solved for x and y in terms of z and w; the
   solutions are x = x(z, w) and y = y(z, w).
2. The functions g and h have continuous partial derivatives at all points (x, y).
3. The determinant

       J(x, y) = | ∂g/∂x  ∂g/∂y |                                                (5.48)
                 | ∂h/∂x  ∂h/∂y |
               = (∂g/∂x)(∂h/∂y) − (∂g/∂y)(∂h/∂x) ≠ 0 at all points (x, y)

If the above conditions are satisfied, then the random variables Z and W are jointly continuous with density
function

    f_Z,W(z, w) = f_X,Y(x, y) |J(x, y)|^(−1)                                     (5.49)

where x = x(z, w) and y = y(z, w).
In the case of multiple solutions (x1, y1), (x2, y2), …, (xn, yn),

    f_Z,W(z, w) = f_X,Y(x1, y1)/|J(x1, y1)| + f_X,Y(x2, y2)/|J(x2, y2)| + … + f_X,Y(xn, yn)/|J(xn, yn)|    (5.50)
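Equation (5.49) can be exercised on the linear transformation treated in Solved Problem 5.25 below. The
sketch (Python with scipy; taking X and Y to be independent N(0, 1) is an assumption made only for the
test) compares the Jacobian formula with the exact density of (Z, W), which for this linear map is
bivariate normal with Var(Z) = Var(W) = 5 and Cov(Z, W) = −3.

# f_{Z,W}(z, w) = (1/4) f_{X,Y}((z+w)/2, (z-w)/4) for Z = X + 2Y, W = X - 2Y (|J| = 4).
import numpy as np
from scipy.stats import multivariate_normal

def f_xy(x, y):                               # independent standard normals
    return np.exp(-(x**2 + y**2) / 2) / (2 * np.pi)

def f_zw(z, w):                               # Jacobian formula, Eq. (5.49)
    return f_xy((z + w) / 2.0, (z - w) / 4.0) / 4.0

exact = multivariate_normal(mean=[0, 0], cov=[[5, -3], [-3, 5]])
print(f_zw(1.0, -0.5), exact.pdf([1.0, -0.5]))   # identical values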
REVIEW QUESTIONS
6. Explain how you obtain the pdf of a single function of several random variables.
7. Explain how you obtain the pdf of two functions of two random variables.
Solved Problems

5.25 Let X and Y be jointly continuous random variables with pdf f_X,Y(x, y). Let Z = X + 2Y and W = X −
2Y. Find the joint pdf of Z and W in terms of f_X,Y.

Solution Let g(X, Y) = X + 2Y and h(X, Y) = X − 2Y. Then

    J(X, Y) = | 1   2 | = −2 − 2 = −4
              | 1  −2 |

Solving z = x + 2y, w = x − 2y gives x = (z + w)/2 and y = (z − w)/4. Hence

    f_Z,W(z, w) = (1/4) f_X,Y((z + w)/2, (z − w)/4)
5.26 Let X1 and X2 be jointly continuous random variables with pdf f_X1,X2(x1, x2). Let Y1 = aX1 + bX2
and Y2 = cX1 + dX2 be new random variables, where a, b, c and d are real constants.
Find the joint pdf of Y1 and Y2 in terms of f_X1,X2.

Solution Let Y1 = g(X1, X2) = aX1 + bX2 and Y2 = h(X1, X2) = cX1 + dX2. Then

    J(X1, X2) = | a  b | = ad − bc
                | c  d |

The solution of the equations y1 = ax1 + bx2, y2 = cx1 + dx2 is given by

    x1 = (dy1 − by2)/(ad − bc) and x2 = (ay2 − cy1)/(ad − bc)

so that

    f_Y1,Y2(y1, y2) = (1/|ad − bc|) f_X1,X2((dy1 − by2)/(ad − bc), (ay2 − cy1)/(ad − bc))
Auxiliary Variables Consider the transformation Z = g(X, Y), where X and Y are two random variables. To
determine the pdf of Z, we first define an auxiliary variable W = X or W = Y and then obtain the joint density
of Z and W using Eq. (5.49). Finally, we obtain f_Z(z) from f_Z,W(z, w) by integrating out w.
5.27 If X and Y are independent random variables, each following N(0, 2), find the pdf of Z = 2X + 3Y.

Solution Given Z = 2X + 3Y. We define the auxiliary random variable W = Y. Solving z = 2x + 3y, w = y
gives x = (z − 3w)/2, y = w, and |J| = 2. Since X and Y are independent N(0, 2),

    f_X,Y(x, y) = (1/4π) e^(−(x²+y²)/4)

so

    f_Z,W(z, w) = (1/2) f_X,Y((z − 3w)/2, w) = (1/8π) e^(−(z² − 6wz + 9w² + 4w²)/16)

    f_Z(z) = (1/8π) e^(−z²/16) ∫ e^(−(13w² − 6wz)/16) dw

Completing the square, 13w² − 6wz = 13(w − 3z/13)² − 9z²/13, and with p = (13/16)(w − 3z/13)²,

    ∫ e^(−13(w − 3z/13)²/16) dw = √(16π/13)

Therefore

    f_Z(z) = (1/8π) e^(−z²/16) e^(9z²/208) √(16π/13) = (1/(2√(13π))) e^(−z²/52)

i.e. Z ~ N(0, 26), as expected, since Var(Z) = 4(2) + 9(2) = 26.
5.28 The random variables X and Y have the joint density function
    f_X,Y(x, y) = xy/18 for 0 < x < 3, 1 < y < 3, and 0 otherwise.
Find the density function of Z = X + 2Y.
Solution Given: Z = X + 2Y. Consider the auxiliary random variable W = Y.
Solving z = x + 2y, w = y gives y = w and x = z − 2w. The Jacobian of the transformation is

    J(x, y) = | 1  2 | = 1  ⇒  |J(x, y)| = 1
              | 0  1 |

The joint density function of Z and W is

    f_Z,W(z, w) = f_X,Y(z − 2w, w) = (z − 2w)w/18 for 0 < z − 2w < 3, 1 < w < 3, and 0 elsewhere.

The density function of Z is f_Z(z) = ∫ f_Z,W(z, w) dw.

Fig. 5.20 (the strip 0 < z − 2w < 3, 1 < w < 3 in the zw plane)

For 2 ≤ z ≤ 5, w runs from 1 to z/2:

    f_Z(z) = (1/18) ∫1^{z/2} (z − 2w)w dw = (1/18){z[w²/2]1^{z/2} − 2[w³/3]1^{z/2}}
           = (1/18){(z/2)(z²/4 − 1) − (2/3)(z³/8 − 1)}
           = (1/18){z³/8 − z/2 − z³/12 + 2/3}
           = (1/18){z³/24 − z/2 + 2/3}
For 5 ≤ z ≤ 6, w runs from (z − 3)/2 to z/2:

    f_Z(z) = (1/18) ∫_{(z−3)/2}^{z/2} (z − 2w)w dw
           = (1/18){(z/2)[z²/4 − (z − 3)²/4] − (2/3)[z³/8 − (z − 3)³/8]}
           = (1/18){(6z² − 9z)/8 − (9z² − 27z + 27)/12}
           = (1/18)(27z − 54)/24 = (z − 2)/16

For 6 ≤ z ≤ 9, w runs from (z − 3)/2 to 3:

    f_Z(z) = (1/18) ∫_{(z−3)/2}^{3} (z − 2w)w dw
           = (1/18){(z/2)[9 − (z − 3)²/4] − (2/3)[27 − (z − 3)³/8]}
           = (1/18){(z/8)(27 − z² + 6z) − (1/12)(243 − z³ + 9z² − 27z)}
           = (1/18){[3(27z − z³ + 6z²) − 2(243 − z³ + 9z² − 27z)]/24}
           = (−z³ + 135z − 486)/432
5.29 If X and Y are independent random variables with density functions f_X(x) = e^(−x) u(x) and f_Y(y) = 2e^(−y) u(y),
find the density function of Z = X + Y.

Practice Problems

5.7 X and Y are two independent random variables with densities f_X(x) = 2e^(−2x) for x > 0 and f_Y(y) = 2e^(−2y) for y > 0.
Find the density of the random variable Z = X − Y. (Ans. f_Z(z) = e^(2z) for z < 0; e^(−2z) for z > 0)
Solved Problems

5.30 Let (X, Y) be a two-dimensional non-negative continuous random variable having the density
    f_X,Y(x, y) = 4xy e^(−(x²+y²)) for x ≥ 0, y ≥ 0, and 0 elsewhere.
Find the pdf of Z = √(X² + Y²).

Solution Given Z = √(X² + Y²). Consider the auxiliary random variable W = X. Then y = √(z² − w²), and

    |J(x, y)| = y/√(x² + y²) = y/z

    f_Z,W(z, w) = f_X,Y(w, √(z² − w²)) (z/y) = 4w√(z² − w²) e^(−z²) z/√(z² − w²) = 4wz e^(−z²),  0 ≤ w ≤ z
5.31 Consider two independent random variables X and Y with identical uniform distribution in (0, 1). (i)
Find the joint density function of Z = X + Y and W = X − Y. (ii) Find the density functions of Z and W.

Solution Here |J(x, y)| = 2, so the joint density function of Z and W is

    f_Z,W(z, w) = 1/2 for 0 ≤ (z + w)/2 ≤ 1 and 0 ≤ (z − w)/2 ≤ 1

i.e. the range of Z and W is 0 ≤ z + w ≤ 2 and 0 ≤ z − w ≤ 2.

Fig. 5.23 (the square bounded by z = w, z = −w, z + w = 2 and z − w = 2)

From Fig. 5.23, we find that in Region I (0 ≤ z ≤ 1),

    f_Z(z) = ∫_{−z}^{z} f_Z,W(z, w) dw = ∫_{−z}^{z} (1/2) dw = z

and in Region II (1 ≤ z ≤ 2),

    f_Z(z) = ∫_{z−2}^{2−z} f_Z,W(z, w) dw = (1/2)[2 − z − (z − 2)] = 2 − z

    f_Z(z) = z for 0 ≤ z ≤ 1
           = 2 − z for 1 ≤ z ≤ 2

Fig. 5.24

From Fig. 5.24, we find that for 0 ≤ w ≤ 1,

    f_W(w) = ∫_{w}^{2−w} f_Z,W(z, w) dz = (1/2)(2 − 2w) = 1 − w

and for −1 ≤ w ≤ 0,

    f_W(w) = ∫_{−w}^{w+2} f_Z,W(z, w) dz = (1/2)(2 + 2w) = w + 1

    ⇒ f_W(w) = 1 − w for 0 ≤ w ≤ 1
             = w + 1 for −1 ≤ w ≤ 0
5.32 If X and Y are independent uniform random variables on (0, 1), find the distribution of XY and X/Y.

Solution Let Z = X/Y with auxiliary random variable W = Y. From the transformation z = x/y and w = y,
the solution for x and y is y = w and x = zw. The Jacobian of the transformation is given by

    J(x, y) = | 1/y  −x/y² | = 1/y
              |  0     1   |

The joint pdf of Z and W is

    f_Z,W(z, w) = f_X,Y(x, y)/|J(x, y)| evaluated at x = zw, y = w, i.e. f_Z,W(z, w) = w

The range of z and w is obtained as follows: 0 ≤ x ≤ 1 gives 0 ≤ zw ≤ 1, and 0 ≤ y ≤ 1 gives 0 ≤ w ≤ 1.

Fig. 5.25, Fig. 5.26 (the regions R1: 0 ≤ z ≤ 1, 0 ≤ w ≤ 1 and R2: z ≥ 1, 0 ≤ w ≤ 1/z)

In Region R1 (0 ≤ z ≤ 1),

    f_Z(z) = ∫0^1 w dw = [w²/2]0^1 = 1/2

In Region R2 (1 ≤ z < ∞),

    f_Z(z) = ∫0^{1/z} w dw = [w²/2]0^{1/z} = 1/(2z²)

    ⇒ f_Z(z) = 1/2 for 0 ≤ z ≤ 1
             = 1/(2z²) for 1 ≤ z < ∞
5.33 For independent unit-exponential X and Y (f_X,Y(x, y) = e^(−(x+y)), x > 0, y > 0), consider
Z = X/(X + Y) with auxiliary variable W = X, so that x = w and y = w(1 − z)/z, with |J(x, y)| = z²/w. Then

    f_Z,W(z, w) = f_X,Y(x, y)/|J(x, y)| = (w/z²) e^(−[w + w(1−z)/z]) = (w/z²) e^(−w/z)

    f_Z(z) = ∫0^∞ (w/z²) e^(−w/z) dw = (1/z²){[−wz e^(−w/z)]0^∞ + z ∫0^∞ e^(−w/z) dw}
           = (1/z²)(z²) = 1

    f_Z(z) = 1 for 0 ≤ z ≤ 1
Practice Problems

5.8 If X and Y are independent exponential random variables with common parameter λ, show that X/(X + Y) is a
uniformly distributed random variable in (0, 1).
5.9 If X and Y are independent exponential random variables, each having parameter λ, find the joint density function of
Z = X + Y and W = e^X. (Ans. f_Z,W(z, w) = λ²e^(−λz)/w, w ≥ 1, z ≥ log w)
5.10 If X and Y are two independent normal variables with mean 0 and variance σ², find the joint density function of V
and W, where V = X cos θ + Y sin θ, W = X sin θ − Y cos θ, and θ is a constant angle.
    (Ans. f_V,W(v, w) = (1/2πσ²) e^(−(v²+w²)/2σ²))
5.11 If X and Y are two independent and identically distributed random variables having the density function f_X(x) =
xe^(−x), x ≥ 0, prove that U and V are independent, where U = X/Y and V = X + Y.
5.12 Given U = X/Y and V = Y, give the relation between the pdf of (X, Y) and that of (U, V).
    (Ans. f_U,V(u, v) = |v| f_X,Y(uv, v))
Solved Problem

5.34 X and Y are independent and identically distributed normal random variables with zero mean and
common variance σ². Find the pdf of Z = √(X² + Y²).

Solution Given Z = √(X² + Y²), X ~ N(0, σ²) and Y ~ N(0, σ²):

    f_X(x) = (1/(√(2π) σ)) e^(−x²/2σ²);  f_Y(y) = (1/(√(2π) σ)) e^(−y²/2σ²)

Since X and Y are independent,

    f_X,Y(x, y) = f_X(x) f_Y(y) = (1/2πσ²) e^(−(x²+y²)/2σ²)

The transformation is z = √(x² + y²), i.e. z² = x² + y². Define the auxiliary variable w = y. The solutions are

    y = w,  x = ±√(z² − y²) = ±√(z² − w²)

and

    J(x, y) = | x/z  y/z | = x/z = √(z² − w²)/z
              |  0    1  |

By Eq. (5.50), summing over the two roots,

    f_Z,W(z, w) = f_X,Y(x1, y1)/|J(x1, y1)| + f_X,Y(x2, y2)/|J(x2, y2)|
                = (z/√(z² − w²)) [2 e^(−z²/2σ²)/(2πσ²)] = z e^(−z²/2σ²)/(πσ² √(z² − w²))
2
5.35 The joint pdf of X and Y is given by fX,Y(x, y) = e–(x+y), x > 0, y > 0. Find the pdf of Z = (X + Y)/2.
∂z ∂z
1 1
∂x ∂y 1
J(x, y) = = 2 2 =
∂w ∂w 2
0 1
∂x ∂y
The joint pdf of the random variable Z and W is
f X ,Y ( x, y)
fZ,W(z, w) = = 2 e - (2 z - w + w ) = 2e -2 z
| J ( x, y)|
The range of z and w is obtained as follows:
Since y > 0; w > 0
y = 0 maps to w = 0
x = 0 maps to w = 2z or z = w/2
For x > 0; 2z – w > 0 or z > w/2
Therefore, the range of w is 0 < w < 2z
• 2z
-2 z
fZ(z) = Ú fZ ,W ( z, w) dw = Ú 2e dw = 2 e -2 z (2 z ) = 4 ze -2 z
-• 0
5.36 If X and Y are independent random variables, each following N(0, σ²), find the density function of
R = √(X² + Y²) and Θ = tan⁻¹(Y/X).

Solution Given:

    f_X(x) = (1/(√(2π) σ)) e^(−x²/2σ²);  f_Y(y) = (1/(√(2π) σ)) e^(−y²/2σ²)

Since X and Y are independent,

    f_X,Y(x, y) = (1/2πσ²) e^(−(x²+y²)/2σ²)

Let us consider x = r cos θ and y = r sin θ. Then

    J = | ∂r/∂x  ∂r/∂y | = |  x/√(x²+y²)   y/√(x²+y²) | = (x² + y²)/(x² + y²)^(3/2) = 1/r
        | ∂θ/∂x  ∂θ/∂y |   | −y/(x²+y²)    x/(x²+y²)  |

so

    f_R,Θ(r, θ) = f_X,Y(r cos θ, r sin θ)/|J| = (r/2πσ²) e^(−r²/2σ²),  r ≥ 0, 0 ≤ θ ≤ 2π
5.37 Let X and Y be independent Gaussian random variables that have zero mean and unit variance.
Show that Z = X/Y is a Cauchy random variable.

Solution Let Z = X/Y with auxiliary random variable W = Y. From the transformation z = x/y and w = y, the
solution for x and y is y = w and x = wz. The Jacobian of the transformation is given by

    J(x, y) = | 1/y  −x/y² | = 1/y
              |  0     1   |

    f_Z,W(z, w) = f_X,Y(x, y)/|J(x, y)| = |w| f_X,Y(zw, w)

    f_Z(z) = ∫ |w| f_X,Y(zw, w) dw

Given f_X(x) = (1/√(2π)) e^(−x²/2) and f_Y(y) = (1/√(2π)) e^(−y²/2),

    f_X,Y(x, y) = (1/2π) e^(−(x²+y²)/2)
    f_X,Y(zw, w) = (1/2π) e^(−(z²w² + w²)/2)

    f_Z(z) = ∫ |w| (1/2π) e^(−(1+z²)w²/2) dw = (2/2π) ∫0^∞ w e^(−(1+z²)w²/2) dw
           = (1/π) ∫0^∞ w e^(−(1+z²)w²/2) dw

Let a = (1 + z²)/2 and t = aw², so that dt = 2aw dw:

    f_Z(z) = (1/π) ∫0^∞ e^(−t) dt/(2a) = 1/(2aπ) = 1/(π(1 + z²))

which is a Cauchy distribution.
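The Cauchy conclusion is easy to see in simulation. A sketch (Python/numpy, added for illustration only):

# The ratio of two independent N(0, 1) variables has density 1/(pi(1 + z^2)).
import numpy as np

rng = np.random.default_rng(6)
n = 2_000_000
z = rng.normal(size=n) / rng.normal(size=n)

for t in (0.0, 1.0, 3.0):
    emp  = np.mean(np.abs(z - t) < 0.01) / 0.02     # density estimate near t
    theo = 1.0 / (np.pi * (1 + t**2))
    print(t, emp, theo)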
5.38 If X and Y are independent random variables with pdfs e^(−x), x ≥ 0, and e^(−y), y ≥ 0, find the density
functions of Z = X/(X + Y) and W = X + Y. Are Z and W independent?

Solution Given Z = X/(X + Y) and W = X + Y, with f_X(x) = e^(−x), x ≥ 0, and f_Y(y) = e^(−y), y ≥ 0.
Since X and Y are independent,

    f_X,Y(x, y) = f_X(x) f_Y(y) = e^(−(x+y)),  x > 0, y > 0

The transformation is z = x/(x + y), w = x + y, from which x = wz and y = w(1 − z).
Since x ≥ 0 and y ≥ 0: w = x + y ≥ 0; y ≥ 0 requires w(1 − z) ≥ 0, i.e. z ≤ 1; and x = wz ≥ 0 requires z ≥ 0.
Hence 0 ≤ z ≤ 1. The Jacobian is

    J(x, y) = | y/(x+y)²  −x/(x+y)² | = (x + y)/(x + y)² = 1/(x + y) = 1/w
              |    1           1    |

so

    f_Z,W(z, w) = f_X,Y(x, y)/|J(x, y)| = w e^(−w),  0 ≤ z ≤ 1, w ≥ 0

    f_Z(z) = ∫0^∞ w e^(−w) dw = [−w e^(−w) − e^(−w)]0^∞ = 1,  0 ≤ z ≤ 1
    f_W(w) = w e^(−w),  w ≥ 0

Since f_Z,W(z, w) = f_Z(z) f_W(w), Z and W are independent.
Practice Problems

5.13 X and Y are independent random variables with f_X(x) = e^(−x) u(x) and f_Y(y) = 3e^(−3y) u(y). Find the density function of
Z = X/Y. (Ans. 3/(z + 3)²)
5.14 If X ~ U(0, 1) and Y ~ U(0, 1), find the joint pdf of X + Y and X − Y. Assume X and Y are independent.
    (Ans. f_Z,W(z, w) = 1/2 for |w| < z < 2 − |w|)
5.15 Show that the convolution of two Cauchy densities is a Cauchy density.
Solved Problems

5.39 If X and Y are independent γ(t1, λ) and γ(t2, λ) variates respectively, find the distribution of X + Y
and X/(X + Y).

Solution Given

    f_X(x) = λe^(−λx) (λx)^(t1−1)/Γ(t1) for x ≥ 0, and 0 for x < 0
    f_Y(y) = λe^(−λy) (λy)^(t2−1)/Γ(t2) for y ≥ 0, and 0 for y < 0

    f_X,Y(x, y) = λ^(t1+t2) x^(t1−1) y^(t2−1) e^(−λ(x+y))/(Γ(t1)Γ(t2)),  x > 0, y > 0

Let z = x + y and w = x/(x + y). Solving for x and y, we get x = wz and y = z(1 − w), and

    J(x, y) = | ∂z/∂x  ∂z/∂y | = |     1            1      | = −1/(x + y) = −1/z
              | ∂w/∂x  ∂w/∂y |   | y/(x+y)²   −x/(x+y)²    |

    |J(x, y)| = 1/z

    f_Z,W(z, w) = f_X,Y(x, y)/|J(x, y)| = z f_X,Y(x, y)
                = z f_X(wz) f_Y(z(1 − w))
5.40 If X and Y are two independent chi-square random variables with n1 and n2 degrees of freedom,
prove that X/Y is a β2(n1/2, n2/2) random variable.

Solution

    f_X(x) = (1/Γ(n1/2)) (1/2)^(n1/2) x^(n1/2 − 1) e^(−x/2),  x > 0
    f_Y(y) = (1/Γ(n2/2)) (1/2)^(n2/2) y^(n2/2 − 1) e^(−y/2),  y > 0

We know f_X,Y(x, y) = f_X(x) f_Y(y). Let z = x/y and w = y; then x = zw and y = w, and

    J(x, y) = | 1/y  −x/y² | = 1/y = 1/w
              |  0     1   |

so f_Z,W(z, w) = w f_X(zw) f_Y(w), and

    f_Z(z) = z^(n1/2 − 1) / (2^((n1+n2)/2) Γ(n1/2) Γ(n2/2)) ∫0^∞ w^((n1+n2)/2 − 1) e^(−(w/2)(1+z)) dw

Let w(1 + z)/2 = t, so that dw = 2 dt/(1 + z):

    f_Z(z) = z^(n1/2 − 1) / (2^((n1+n2)/2) Γ(n1/2) Γ(n2/2)) ∫0^∞ (2t/(1 + z))^((n1+n2)/2 − 1) e^(−t) (2/(1 + z)) dt
           = z^(n1/2 − 1) / (Γ(n1/2) Γ(n2/2) (1 + z)^((n1+n2)/2)) ∫0^∞ t^((n1+n2)/2 − 1) e^(−t) dt
           = Γ((n1 + n2)/2) z^(n1/2 − 1) / (Γ(n1/2) Γ(n2/2) (1 + z)^((n1+n2)/2))
           = z^(n1/2 − 1) / (B(n1/2, n2/2) (1 + z)^((n1+n2)/2))

⇒ Z is a β2(n1/2, n2/2) random variable.
Practice Problem

5.16 If X and Y are two independent chi-square random variables with degrees of freedom n1 and n2, show that
X/(X + Y) is a β1(n1/2, n2/2) variate.
CORRELATION COEFFICIENT 5.5
The correlation coefficient is a measure of the degree of linearity (similarity) between two random variables
X and Y. The correlation coefficient of two random variables, denoted by ρXY, is defined as

    ρXY = E[((X − mX)/σX)((Y − mY)/σY)]                                          (5.51)

where (X − mX)/σX and (Y − mY)/σY are normalized random variables. From Eq. (5.51),

    ρXY = E[(X − mX)(Y − mY)]/(σX σY) = Cov(X, Y)/(σX σY) = σXY/(σX σY)

or

    ρXY = Cov(X, Y)/√(Var(X) Var(Y))                                             (5.52)

The correlation coefficient has the property that

    −1 ≤ ρXY ≤ 1

We can prove this as follows. Suppose that X and Y have variances given by σX² and σY², respectively. Then

    0 ≤ Var(X/σX + Y/σY) = Var(X)/σX² + Var(Y)/σY² + 2Cov(X, Y)/(σX σY)          (5.53)
      = 2(1 + ρXY)
    ⇒ −1 ≤ ρXY                                                                   (5.54)

Similarly, 0 ≤ Var(X/σX − Y/σY) = 2(1 − ρXY) gives ρXY ≤ 1.
Fig. 5.27 Scatter plots of (X, Y) for various values of ρXY (including small ρXY, and independent X and Y)
Summary
(i) Two random variables are said to be uncorrelated if E[XY] = E[X]E[Y]. Uncorrelated means not
    linearly related. In this case, Cov(X, Y) = 0 and ρXY = 0.
(ii) Two random variables are said to be orthogonal if E[XY] = 0.
(iii) Two random variables X and Y are said to be independent if, for every pair (x, y),
    f_X,Y(x, y) = f_X(x) f_Y(y), or equivalently F_X,Y(x, y) = F_X(x) F_Y(y).
(iv) If either X or Y has zero mean, then X and Y uncorrelated implies X and Y orthogonal.
(v) X and Y independent implies X and Y uncorrelated; X and Y uncorrelated does not imply that X and Y
    are independent.
REVIEW QUESTIONS
8. Define correlation coefficient.
9. Prove that correlation coefficient lies between –1 and 1.
10. What is the condition for orthogonality of random variables?
Solved Problems

5.41 Calculate the correlation coefficient for the following heights (in inches) of fathers X and their sons Y.

    X: 65 66 67 67 68 69 70 72
    Y: 67 68 65 68 72 72 69 71

Solution The correlation coefficient is ρXY = Cov(X, Y)/(σX σY), where Cov(X, Y) = E[XY] − E[X]E[Y].
We find the required values using the following table:

    X      Y      XY      X²      Y²
    65     67     4355    4225    4489
    66     68     4488    4356    4624
    67     65     4355    4489    4225
    67     68     4556    4489    4624
    68     72     4896    4624    5184
    69     72     4968    4761    5184
    70     69     4830    4900    4761
    72     71     5112    5184    5041
    544    552    37560   37028   38132

    mX = E[X] = ΣX/n = 544/8 = 68
    mY = E[Y] = ΣY/n = 552/8 = 69
    σX² = E[X²] − {E[X]}² = ΣX²/n − mX² = 37028/8 − (68)² = 4.5  ⇒  σX = 2.121
    σY² = E[Y²] − {E[Y]}² = ΣY²/n − mY² = 38132/8 − (69)² = 5.5  ⇒  σY = 2.345
    E[XY] = ΣXY/n = 37560/8 = 4695
    Cov(X, Y) = 4695 − 68(69) = 3
    ρXY = Cov(X, Y)/(σX σY) = 3/((2.121)(2.345)) = 0.603
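The same computation in numpy (a cross-check added for illustration; np.corrcoef normalizes by the same
population moments used in the table, so it reproduces ρXY = 0.603):

# Correlation coefficient for the father/son height data of Solved Problem 5.41.
import numpy as np

x = np.array([65, 66, 67, 67, 68, 69, 70, 72], dtype=float)
y = np.array([67, 68, 65, 68, 72, 72, 69, 71], dtype=float)

cov = np.mean(x * y) - x.mean() * y.mean()     # 3.0
rho = cov / (x.std() * y.std())                # population std (ddof = 0)
print(cov, rho, np.corrcoef(x, y)[0, 1])       # 3.0  0.603...  0.603...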
5.42 Two random variables X and Y are related as Y = 4X + 9. Find the correlation coefficient between
X and Y.

Solution Cov(X, Y) = E[(X − mX)(Y − mY)] = E[(X − mX)(4X − 4mX)] = 4σX², and σY = 4σX. Hence

    ρXY = Cov(X, Y)/(σX σY) = 4σX²/(σX(4σX)) = 1
5.43 Consider two random variables X and Y that are independent, zero mean, with variances 49 and 25
respectively. Find the correlation coefficient between (X + Y) and (X − Y).

Solution Given mX = 0 and mY = 0.
Practice Problem

5.17 If the independent random variables X and Y have variances 36 and 16 respectively, find the correlation coefficient
between X + Y and X − Y, assuming X and Y are zero mean. (Ans. 5/13)
Solved Problem

5.44 For the following bivariate distribution, determine the correlation coefficient.

          X      0      1      2      3
   Y
   1            5/48   7/48    0      0
   2            9/48   5/48   5/24    0
   3             0     1/12   1/16   5/48

Solution From the table,

    P_X(X = 0) = 14/48, P_X(X = 1) = 16/48, P_X(X = 2) = 13/48, P_X(X = 3) = 5/48
    P_Y(Y = 1) = 12/48, P_Y(Y = 2) = 24/48, P_Y(Y = 3) = 12/48

    E[X] = Σ xi p_X(xi) = 0(14/48) + 1(16/48) + 2(13/48) + 3(5/48) = 57/48 = 19/16
    E[Y] = Σ yj p_Y(yj) = 1(12/48) + 2(24/48) + 3(12/48) = 2
    E[X²] = (0)²(14/48) + (1)²(16/48) + (2)²(13/48) + (3)²(5/48) = 113/48
    E[Y²] = (1)²(12/48) + (2)²(24/48) + (3)²(12/48) = 216/48
    E[XY] = ΣΣ xi yj p_X,Y(xi, yj) = 132/48 = 33/12

    σX² = 113/48 − (19/16)² = 0.944  ⇒  σX = 0.97
    σY² = 216/48 − (2)² = 0.5        ⇒  σY = 0.707

    Cov(X, Y) = E[XY] − E[X]E[Y] = 33/12 − (19/16)(2) = 11/4 − 19/8 = 3/8

    ρX,Y = Cov(X, Y)/(σX σY) = (3/8)/((0.97)(0.707)) = 0.545
Practice Problem

5.18 Find the correlation coefficient ρX,Y for the bivariate random variable (X, Y) having the joint pdf
    f_X,Y(x, y) = 2xy for 0 < x < 1, 0 < y < 1, and 0 otherwise. (Ans. 0.8)
Solved Problems

5.45 If X, Y and Z are uncorrelated random variables with the same variance, find the correlation coefficient
between (X + Y) and (Y + Z).
5.46 Random variables X and Y have the joint pmf
    P(X = −1, Y = 0) = 1/8, P(X = −1, Y = 1) = 2/8, P(X = 1, Y = 0) = 3/8, P(X = 1, Y = 1) = 2/8.
Find the correlation coefficient of X and Y.

Solution The marginal pmfs are

    P_X(X = −1) = 3/8, P_X(X = 1) = 5/8;  P_Y(Y = 0) = 1/2, P_Y(Y = 1) = 1/2

    E[X] = Σ xi p_X(xi) = (−1)(3/8) + 1(5/8) = 1/4;  E[Y] = Σ yj p_Y(yj) = 0(1/2) + 1(1/2) = 1/2
    E[X²] = (−1)²(3/8) + (1)²(5/8) = 1;  E[Y²] = (0)²(1/2) + (1)²(1/2) = 1/2

    E[XY] = ΣΣ xi yj p_X,Y(xi, yj) = (−1)(0)(1/8) + (−1)(1)(2/8) + (1)(0)(3/8) + (1)(1)(2/8) = 0

    σX² = E[X²] − {E[X]}² = 1 − (1/4)² = 15/16  ⇒  σX = √15/4
    σY² = E[Y²] − {E[Y]}² = 1/2 − (1/2)² = 1/4  ⇒  σY = 1/2

    Cov(X, Y) = E[XY] − E[X]E[Y] = 0 − (1/4)(1/2) = −1/8

    ρ = Cov(X, Y)/(σX σY) = (−1/8)/((√15/4)(1/2)) = −0.258
5.47 The joint pmf of X and Y is given in the following table.

          Y     −1      0      1     p_X(x)
   X
   −1          1/6      0     1/6     1/3
    0          1/3      0      0      1/3
    1          1/6      0     1/6     1/3
   p_Y(y)      2/3      0     1/3

Show that the covariance is zero even though the two random variables are not independent. (AU 2008)

Solution From the marginals, E[X] = (−1)(1/3) + 0(1/3) + 1(1/3) = 0 and E[Y] = (−1)(2/3) + 0(0) + 1(1/3) = −1/3.

    E[XY] = ΣΣ xi yj p_X,Y(xi, yj)
          = (−1)(−1)(1/6) + (−1)(0)(0) + (−1)(1)(1/6) + 0(−1)(1/3)
            + 0(0)(0) + 0(1)(0) + 1(−1)(1/6) + 1(0)(0) + 1(1)(1/6)
          = 1/6 − 1/6 − 1/6 + 1/6 = 0

    Cov(X, Y) = E[XY] − E[X]E[Y] = 0 − (0)(−1/3) = 0

For independence we need p_X,Y(x, y) = p_X(x) p_Y(y). But

    P(X = −1, Y = −1) = 1/6, while P_X(X = −1) P_Y(Y = −1) = (1/3)(2/3) = 2/9 ≠ 1/6

So X and Y are not independent, although Cov(X, Y) = 0.
5.48 For two random variables X and Y: E[X] = 5, E[Y] = 10, E[XY] = 75, E[X²] = 41, E[Y²] = 149.
Find the covariance and the correlation coefficient.

Solution Given E[X] = 5, E[Y] = 10, E[XY] = 75, E[X²] = 41 and E[Y²] = 149.

    Cov(X, Y) = E[XY] − E[X]E[Y] = 75 − 5(10) = 25
    σX² = E[X²] − {E[X]}² = 41 − (5)² = 16  ⇒  σX = 4
    σY² = E[Y²] − {E[Y]}² = 149 − (10)² = 49  ⇒  σY = 7
    ρXY = Cov(X, Y)/(σX σY) = 25/(4(7)) = 25/28 = 0.893
5.49 For random variables X and Y having mX = 2, mY = 3, σX² = 9, σY² = 16 and ρXY = 2/3,
find (i) the covariance of X and Y, (ii) the correlation of X and Y, and (iii) E[X²] and E[Y²].

Solution Given mX = 2, mY = 3, σX² = 9, σY² = 16 and ρXY = 2/3:

    ρXY = Cov(X, Y)/(σX σY)  ⇒  Cov(X, Y) = ρXY σX σY = (2/3)(3)(4) = 8
    Cov(X, Y) = E[XY] − E[X]E[Y]  ⇒  E[XY] = 8 + 2(3) = 14
    E[X²] = σX² + mX² = 9 + (2)² = 13
    E[Y²] = σY² + mY² = 16 + (3)² = 25
5.50 X and Y are zero-mean random variables with σX² = 9 and σY² = 25. Their correlation coefficient
is ρXY = −0.6. If W = (aX + 3Y)², (a) find the value of the parameter a that minimizes the mean value of W,
and (b) find the minimum value.

Solution Given W = (aX + 3Y)², mX = 0, mY = 0, σX² = E[X²] = 9 and σY² = E[Y²] = 25.

    ρXY = Cov(X, Y)/(σX σY) = Cov(X, Y)/(3(5))  ⇒  Cov(X, Y) = −0.6(15) = −9
    Cov(X, Y) = E[XY] − mX mY = E[XY] = −9

    E[W] = E[(aX + 3Y)²] = E[a²X² + 9Y² + 6aXY]
         = a²E[X²] + 9E[Y²] + 6aE[XY] = 9a² − 54a + 225

Setting dE[W]/da = 0 gives 18a − 54 = 0, i.e. a = 3, and

    E[W]min = 9(3)² − 54(3) + 225 = 144
5.51 Two random variables X and Y have means 1 and 2 respectively, and variances 4 and 1 respectively.
Their correlation coefficient is 0.4. New random variables W and V are defined as
    V = −X + 2Y;  W = X + 3Y
Find (a) the means, (b) the variances, (c) the correlation, and (d) the correlation coefficient of V and W.
Practice Problems

5.19 X, Y and Z are uncorrelated random variables with zero mean and standard deviations 5, 12 and 9 respectively. If U
= X + Y and V = Y + Z, find the correlation coefficient between U and V. (Ans. 48/65)
5.20 Two random variables X and Y are related as Y = 3X + 5. Find the correlation coefficient between X and Y.
    (Ans. 1)
Solved Problems

5.52 The joint pdf of X and Y is
    f_X,Y(x, y) = (1/y) e^(−(y + x/y)) for x > 0, y > 0, and 0 elsewhere.
Find E[X], E[Y] and E[XY].

Solution

    E[X] = ∫∫ x f_X,Y(x, y) dx dy = ∫0^∞ (1/y) e^(−y) [∫0^∞ x e^(−x/y) dx] dy
         = ∫0^∞ (1/y) e^(−y) (y²) dy = ∫0^∞ y e^(−y) dy = [−y e^(−y) − e^(−y)]0^∞ = 1

    E[X] = 1

    E[Y] = ∫∫ y f_X,Y(x, y) dx dy = ∫0^∞ e^(−y) [∫0^∞ e^(−x/y) dx] dy
         = ∫0^∞ e^(−y) y dy = 1

    E[XY] = ∫∫ xy f_X,Y(x, y) dx dy = ∫0^∞ e^(−y) [∫0^∞ x e^(−x/y) dx] dy
          = ∫0^∞ y² e^(−y) dy = [−y² e^(−y) − 2y e^(−y) − 2e^(−y)]0^∞ = 2
5.53 The random variables X and Y have a joint density function given by
    f_X,Y(x, y) = (2/x) e^(−2x) for 0 ≤ x < ∞, 0 ≤ y ≤ x, and 0 otherwise.
Compute Cov(X, Y).

Solution

    E[XY] = ∫0^∞ ∫0^x xy (2/x) e^(−2x) dy dx = 2 ∫0^∞ e^(−2x) [∫0^x y dy] dx
          = 2 ∫0^∞ e^(−2x) (x²/2) dx = ∫0^∞ x² e^(−2x) dx = 1/4

    E[X] = ∫0^∞ ∫0^x x (2/x) e^(−2x) dy dx = ∫0^∞ 2x e^(−2x) dx = 2(1/4) = 1/2

    E[Y] = ∫0^∞ ∫0^x y (2/x) e^(−2x) dy dx = ∫0^∞ (2/x) e^(−2x) (x²/2) dx = ∫0^∞ x e^(−2x) dx = 1/4

    Cov(X, Y) = E[XY] − E[X]E[Y] = 1/4 − (1/2)(1/4) = 1/4 − 1/8 = 1/8

    Cov(X, Y) = 1/8
5.54 The joint density of X and Y is
    f_X,Y(x, y) = 0.3δ(x + a)δ(y − 1) + 0.15δ(x − 2)δ(y − 2) + 0.15δ(x − a)δ(y − a) + 0.4δ(x − 1)δ(y − 1)
Find the value of a that minimizes the correlation R_XY = E[XY], find the minimum value, and determine
whether X and Y are orthogonal.

Solution For mass concentrated at a point, E[XY] = ∫∫ xy δ(x − a)δ(y − b) dx dy = ab. Hence

    R_XY = 0.3(−a)(1) + 0.15(2)(2) + 0.15(a)(a) + 0.4(1)(1)
         = −0.3a + 0.6 + 0.15a² + 0.4 = 0.15a² − 0.3a + 1

    dR_XY/da = 0.3a − 0.3 = 0  ⇒  a = 1

At a = 1, R_XY = 0.15(1)² − 0.3(1) + 1 = 0.85.
X and Y are not orthogonal, since R_XY ≠ 0.
Practice Problem

Solved Problems

5.55 Random variables V and W are defined as V = X + aY and W = X − aY, where a is a real number.
(a) Find a such that V and W are orthogonal. (b) If X and Y are Gaussian, are V and W Gaussian? Find
their means and covariance.

Solution
(a) Given V = X + aY and W = X − aY. For orthogonality, E[VW] = 0:

    E[VW] = E[(X + aY)(X − aY)] = E[X² − a²Y²] = E[X²] − a²E[Y²]
    ⇒ E[X²] − a²E[Y²] = 0  ⇒  |a| = √(E[X²]/E[Y²])

In terms of moments, |a| = √(m20/m02).
(b) If X and Y are Gaussian, V and W are also Gaussian, since V and W are linear transformations of X
and Y.

    mV = E[V] = E[X] + aE[Y] = mX + amY
    mW = E[W] = E[X] − aE[Y] = mX − amY

    Cov(V, W) = E[(V − mV)(W − mW)]
              = E[{(X − mX) + a(Y − mY)}{(X − mX) − a(Y − mY)}]
              = E[(X − mX)² − a²(Y − mY)²] = σX² − a²σY²
5.56 Let X1, X2, …, Xn be independent and identically distributed random variables having mean
mX and variance σX², and let X̄ = Σ_{i=1}^{n} Xi/n. Show that (i) E[X̄] = mX, (ii) Var(X̄) = σX²/n, and
(iii) E[Σ_{i=1}^{n} (Xi − X̄)²] = (n − 1)σX².

Solution Given X̄ = Σ_{i=1}^{n} Xi/n.

(i) E[X̄] = E[Σ (Xi/n)] = Σ E[Xi]/n = (1/n)(n mX) = mX

(ii) Var(X̄) = Var[Σ Xi/n] = (1/n²) Σ Var(Xi) = nσX²/n² = σX²/n

(iii) E[Σ (Xi − X̄)²] = E[Σ (Xi² + X̄² − 2Xi X̄)]
        = E[Σ Xi² + nX̄² − 2(Σ Xi) X̄] = E[Σ Xi² + nX̄² − 2nX̄²]     (since Σ Xi = nX̄)
        = Σ E[Xi²] − nE[X̄²]
        = n(σX² + mX²) − n(σX²/n + mX²)
        = (n − 1)σX²
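All three results of Problem 5.56 can be observed in simulation. A sketch (Python/numpy; the normal
population with mX = 2 and σX² = 9 is an assumed example):

# E[Xbar] = mX, Var(Xbar) = sigma^2/n, E[sum (Xi - Xbar)^2] = (n - 1) sigma^2.
import numpy as np

rng = np.random.default_rng(7)
n, trials = 10, 200_000
samples = rng.normal(2.0, 3.0, size=(trials, n))
xbar = samples.mean(axis=1)

print(xbar.mean())                                        # ~ 2.0
print(xbar.var())                                         # ~ 9/10 = 0.9
print(((samples - xbar[:, None])**2).sum(axis=1).mean())  # ~ 9(n - 1) = 81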
5.57 Prove that
(a) Cov(a + bX, c + dY) = bd Cov(X, Y)
(b) Cov(X + Y, Z) = Cov(X, Z) + Cov(Y, Z)

Solution
(a) Cov(a + bX, c + dY) = E[{(a + bX) − (a + bmX)}{(c + dY) − (c + dmY)}]
                        = E[b(X − mX) d(Y − mY)]
                        = bd E[(X − mX)(Y − mY)]
                        = bd Cov(X, Y)
(b) Cov(X + Y, Z) = E[(X + Y − mX − mY)(Z − mZ)]
                  = E[{(X − mX) + (Y − mY)}(Z − mZ)]
                  = E[(X − mX)(Z − mZ)] + E[(Y − mY)(Z − mZ)]
                  = Cov(X, Z) + Cov(Y, Z)

5.58 If X and Y are identically distributed, not necessarily independent, show that Cov(X + Y, X − Y) = 0.

Solution
    Cov(X + Y, X − Y) = E[{(X + Y) − (mX + mY)}{(X − Y) − (mX − mY)}]
                      = E[{(X − mX) + (Y − mY)}{(X − mX) − (Y − mY)}]
                      = E[(X − mX)² − (Y − mY)²]
                      = E[(X − mX)²] − E[(Y − mY)²]
                      = σX² − σY² = 0
since X and Y are identically distributed.
Practice Problems

Solved Problem

5.59 A random voltage V ~ U(0, 6 V) is applied to a resistor whose resistance R is a binary random
variable taking the value 5 Ω or 10 Ω with equal probability, independent of V.
(a) Find the expected power dissipated in R.
(b) Find R_VP, R_RP and σVP².

Solution The pdfs are f_V(v) = 1/6 for 0 ≤ v ≤ 6, and

    f_R(r) = (1/2)δ(r − 5) + (1/2)δ(r − 10)

    E[V²] = ∫0^6 v²(1/6) dv = (1/6)[v³/3]0^6 = (6)³/(6 × 3) = 12
    E[1/R] = Σ (1/ri) P{R = ri} = (1/5)(1/2) + (1/10)(1/2) = (2 + 1)/20 = 3/20

Since V and R are independent, V² and 1/R are independent and uncorrelated, so the expected power is

    E[P] = E[V²/R] = E[V²] E[1/R] = 12(3/20) = 1.8 W

    R_VP = E[VP] = E[V·V²/R] = E[1/R] E[V³] = (3/20) ∫0^6 v³(1/6) dv = (1/40)[v⁴/4]0^6 = 8.1
    E[V] = ∫0^6 v(1/6) dv = (1/6)[v²/2]0^6 = 3
Practice Problems
Solved Problem
E[XY] = \int_0^1\int_0^1 xy\,\frac{3}{2}(x^2 + y^2)\,dx\,dy = \frac{3}{2}\int_0^1\int_0^1 xy(x^2 + y^2)\,dx\,dy
= \frac{3}{2}\int_0^1\left[y\frac{x^4}{4}\Big|_0^1 + y^3\frac{x^2}{2}\Big|_0^1\right]dy = \frac{3}{2}\int_0^1\left(\frac{y}{4} + \frac{y^3}{2}\right)dy
= \frac{3}{2}\left[\frac{y^2}{8} + \frac{y^4}{8}\right]_0^1 = \frac{3}{2}\left[\frac{1}{8} + \frac{1}{8}\right] = \frac{3}{8}
The marginal pdf of X is
f_X(x) = \int_0^1\frac{3}{2}(x^2 + y^2)\,dy = \frac{3}{2}\left(x^2 + \frac{1}{3}\right) = \frac{3x^2 + 1}{2},\quad 0 \le x \le 1
The marginal pdf of Y is given by
f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx = \int_0^1\frac{3}{2}(x^2 + y^2)\,dx = \frac{3}{2}\left(\frac{1}{3} + y^2\right) = \frac{3y^2 + 1}{2},\quad 0 \le y \le 1
E[X] = \int_{-\infty}^{\infty} x f_X(x)\,dx = \int_0^1 x\,\frac{3x^2 + 1}{2}\,dx = \frac{1}{2}\left[3\frac{x^4}{4} + \frac{x^2}{2}\right]_0^1 = \frac{1}{2}\left[\frac{3}{4} + \frac{1}{2}\right] = \frac{5}{8}
E[Y] = \int_{-\infty}^{\infty} y f_Y(y)\,dy = \int_0^1 y\,\frac{3y^2 + 1}{2}\,dy = \frac{1}{2}\left[3\frac{y^4}{4} + \frac{y^2}{2}\right]_0^1 = \frac{1}{2}\left[\frac{3}{4} + \frac{1}{2}\right] = \frac{5}{8}
E[X^2] = \int_{-\infty}^{\infty} x^2 f_X(x)\,dx = \int_0^1 x^2\left(\frac{3x^2 + 1}{2}\right)dx = \frac{1}{2}\left[3\frac{x^5}{5} + \frac{x^3}{3}\right]_0^1 = \frac{1}{2}\left[\frac{3}{5} + \frac{1}{3}\right] = \frac{7}{15}
Similarly,
E[Y^2] = \int_0^1 y^2\left(\frac{3y^2 + 1}{2}\right)dy = \frac{7}{15}
Var(X) = E[X^2] - \{E[X]\}^2 = \frac{7}{15} - \left(\frac{5}{8}\right)^2 = \frac{7}{15} - \frac{25}{64} = \frac{73}{960}
Var(Y) = E[Y^2] - \{E[Y]\}^2 = \frac{7}{15} - \left(\frac{5}{8}\right)^2 = \frac{73}{960}
\sigma_X = \sigma_Y = \sqrt{\frac{73}{960}} = 0.2757
Cov(X, Y) = E[XY] - E[X]E[Y] = \frac{3}{8} - \frac{25}{64} = -\frac{1}{64}
\rho_{XY} = \frac{Cov(X, Y)}{\sigma_X\sigma_Y} = \frac{-1/64}{73/960} = -0.20547
Practice Problems
5.27 If X and Y are independent random variables and U and V are defined by U = X cos θ + Y sin θ, V = Y cos θ − X sin θ, show that the correlation coefficient between U and V is given by
\rho_{UV} = \frac{(\sigma_Y^2 - \sigma_X^2)\sin 2\theta}{\sqrt{(\sigma_Y^2 - \sigma_X^2)^2\sin^2 2\theta + 4\sigma_X^2\sigma_Y^2}}
5.28 The continuous random variables X and Y have the pdf f_{X,Y}(x, y) = e^{-(x + y)}, 0 < x < \infty, 0 < y < \infty. Find \rho_{XY}.
(Ans. zero)
f_{X,Y}(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1 - \rho_{XY}^2}}\exp\left[-\frac{1}{2(1 - \rho_{XY}^2)}\left\{\frac{(x - m_X)^2}{\sigma_X^2} + \frac{(y - m_Y)^2}{\sigma_Y^2} - \frac{2\rho_{XY}(x - m_X)(y - m_Y)}{\sigma_X\sigma_Y}\right\}\right] (5.57)
where m_X, m_Y, \sigma_X^2, \sigma_Y^2 and \rho_{XY} are the means, variances and correlation coefficient of X and Y. Equation (5.57) is also known as the bivariate Gaussian density, denoted by the shorthand
(X, Y) ~ N(m_X, m_Y, \sigma_X^2, \sigma_Y^2, \rho_{XY}) (5.58)
The pdf of jointly Gaussian random variables for different correlation coefficients is shown in Fig. 5.28. Note that if \rho_{XY} = 0, then Eq. (5.57) can be written as
f_{X,Y}(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y}\exp\left\{-\frac{1}{2}\left[\frac{(x - m_X)^2}{\sigma_X^2} + \frac{(y - m_Y)^2}{\sigma_Y^2}\right]\right\}
= \frac{1}{2\pi\sigma_X\sigma_Y}\exp\left\{-\frac{(x - m_X)^2}{2\sigma_X^2}\right\}\exp\left\{-\frac{(y - m_Y)^2}{2\sigma_Y^2}\right\}
= \frac{1}{\sqrt{2\pi\sigma_X^2}}\exp\left\{-\frac{(x - m_X)^2}{2\sigma_X^2}\right\}\cdot\frac{1}{\sqrt{2\pi\sigma_Y^2}}\exp\left\{-\frac{(y - m_Y)^2}{2\sigma_Y^2}\right\}
= f_X(x)\,f_Y(y)
where
f_X(x) = \frac{1}{\sqrt{2\pi\sigma_X^2}}e^{-(x - m_X)^2/2\sigma_X^2}
and
f_Y(y) = \frac{1}{\sqrt{2\pi\sigma_Y^2}}e^{-(y - m_Y)^2/2\sigma_Y^2}
From the above discussion, we find that jointly Gaussian random variables that are uncorrelated are also statistically independent.
The loci of constant values of the Gaussian pdf are ellipses, since the following equation describes an ellipse:
\frac{(x - m_X)^2}{\sigma_X^2} + \frac{(y - m_Y)^2}{\sigma_Y^2} - \frac{2\rho_{XY}(x - m_X)(y - m_Y)}{\sigma_X\sigma_Y} = r^2 (5.59)
Fig. 5.28 The joint density function of two jointly normal random variables with (a) \rho = 0, (b) \rho = 1, (c) \rho = -1 (surface plots omitted)
The elements of the covariance matrix are
C_{ij} = E[(X_i - m_{X_i})(X_j - m_{X_j})] = \begin{cases}\sigma_{X_i}^2 & i = j\\ C_{X_iX_j} & i \ne j\end{cases}
Let us consider two Gaussian random variables X_1 and X_2. Then the matrix
[X - m] = \begin{bmatrix}X_1 - m_{X_1}\\ X_2 - m_{X_2}\end{bmatrix}
and the covariance matrix
C = \begin{bmatrix}\sigma_{X_1}^2 & Cov(X_1, X_2)\\ Cov(X_1, X_2) & \sigma_{X_2}^2\end{bmatrix} = \begin{bmatrix}\sigma_{X_1}^2 & \rho_{X_1X_2}\sigma_{X_1}\sigma_{X_2}\\ \rho_{X_1X_2}\sigma_{X_1}\sigma_{X_2} & \sigma_{X_2}^2\end{bmatrix}
where \rho_{X_1X_2} = \dfrac{Cov(X_1, X_2)}{\sigma_{X_1}\sigma_{X_2}}.
The determinant of C is
|C| = \sigma_{X_1}^2\sigma_{X_2}^2 - \rho_{X_1X_2}^2\sigma_{X_1}^2\sigma_{X_2}^2 = (1 - \rho_{X_1X_2}^2)\sigma_{X_1}^2\sigma_{X_2}^2
The inverse of the matrix is
C^{-1} = \frac{Adj\,C}{|C|},\qquad Adj\,C = \begin{bmatrix}\sigma_{X_2}^2 & -\rho_{X_1X_2}\sigma_{X_1}\sigma_{X_2}\\ -\rho_{X_1X_2}\sigma_{X_1}\sigma_{X_2} & \sigma_{X_1}^2\end{bmatrix}
C^{-1} = \frac{1}{1 - \rho_{X_1X_2}^2}\begin{bmatrix}\dfrac{1}{\sigma_{X_1}^2} & \dfrac{-\rho}{\sigma_{X_1}\sigma_{X_2}}\\ \dfrac{-\rho}{\sigma_{X_1}\sigma_{X_2}} & \dfrac{1}{\sigma_{X_2}^2}\end{bmatrix}
Consider the term
(X - m_X)^TC^{-1}(X - m_X) = \frac{1}{1 - \rho_{X_1X_2}^2}[x_1 - m_{X_1}\quad x_2 - m_{X_2}]\begin{bmatrix}\dfrac{1}{\sigma_{X_1}^2} & \dfrac{-\rho}{\sigma_{X_1}\sigma_{X_2}}\\ \dfrac{-\rho}{\sigma_{X_1}\sigma_{X_2}} & \dfrac{1}{\sigma_{X_2}^2}\end{bmatrix}\begin{bmatrix}x_1 - m_{X_1}\\ x_2 - m_{X_2}\end{bmatrix}
= \frac{1}{1 - \rho_{X_1X_2}^2}\left[\frac{(x_1 - m_{X_1})^2}{\sigma_{X_1}^2} - \frac{2\rho(x_1 - m_{X_1})(x_2 - m_{X_2})}{\sigma_{X_1}\sigma_{X_2}} + \frac{(x_2 - m_{X_2})^2}{\sigma_{X_2}^2}\right]
Hence
f_{X_1,X_2}(x_1, x_2) = \frac{1}{2\pi|C|^{1/2}}\exp\left\{-\frac{1}{2(1 - \rho_{X_1X_2}^2)}\left[\frac{(x_1 - m_{X_1})^2}{\sigma_{X_1}^2} - \frac{2\rho(x_1 - m_{X_1})(x_2 - m_{X_2})}{\sigma_{X_1}\sigma_{X_2}} + \frac{(x_2 - m_{X_2})^2}{\sigma_{X_2}^2}\right]\right\}
= \frac{1}{2\pi\sigma_{X_1}\sigma_{X_2}\sqrt{1 - \rho_{X_1X_2}^2}}\exp\left\{-\frac{1}{2(1 - \rho_{X_1X_2}^2)}\left[\frac{(x_1 - m_{X_1})^2}{\sigma_{X_1}^2} - \frac{2\rho(x_1 - m_{X_1})(x_2 - m_{X_2})}{\sigma_{X_1}\sigma_{X_2}} + \frac{(x_2 - m_{X_2})^2}{\sigma_{X_2}^2}\right]\right\} (5.61)
REVIEW QUESTIONS
11. What are the properties of Gaussian random variables?
12. Give the expression for the joint pdf of two Gaussian random variables.
Solved Problem
5.61 The joint density function of two Gaussian random variables X and Y is
f_{X,Y}(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1 - \rho^2}}\exp\left\{-\frac{1}{2(1 - \rho^2)}\left[\frac{(x - m_X)^2}{\sigma_X^2} - \frac{2\rho(x - m_X)(y - m_Y)}{\sigma_X\sigma_Y} + \frac{(y - m_Y)^2}{\sigma_Y^2}\right]\right\}
Find the conditional density functions f_{X|Y}(x|y) and f_{Y|X}(y|x) and show that they are also Gaussian.
Solution
f_{X|Y}(x|y) = \frac{f_{X,Y}(x, y)}{f_Y(y)} = \frac{\dfrac{1}{2\pi\sigma_X\sigma_Y\sqrt{1 - \rho^2}}\exp\left\{-\dfrac{1}{2(1 - \rho^2)}\left[\dfrac{(x - m_X)^2}{\sigma_X^2} - \dfrac{2\rho(x - m_X)(y - m_Y)}{\sigma_X\sigma_Y} + \dfrac{(y - m_Y)^2}{\sigma_Y^2}\right]\right\}}{\dfrac{1}{\sqrt{2\pi}\,\sigma_Y}\exp\left\{-\dfrac{(y - m_Y)^2}{2\sigma_Y^2}\right\}}
= \frac{1}{\sigma_X\sqrt{2\pi(1 - \rho^2)}}\exp\left\{-\left[\frac{(x - m_X)^2}{2\sigma_X^2(1 - \rho^2)} - \frac{2\rho(x - m_X)(y - m_Y)}{2\sigma_X\sigma_Y(1 - \rho^2)} + \frac{(y - m_Y)^2}{2\sigma_Y^2(1 - \rho^2)} - \frac{(y - m_Y)^2}{2\sigma_Y^2}\right]\right\}
= \frac{1}{\sigma_X\sqrt{2\pi(1 - \rho^2)}}\exp\left\{-\frac{(x - m_X)^2 - 2\rho\frac{\sigma_X}{\sigma_Y}(x - m_X)(y - m_Y) + \rho^2\frac{\sigma_X^2}{\sigma_Y^2}(y - m_Y)^2}{2\sigma_X^2(1 - \rho^2)}\right\}
= \frac{1}{\sigma_X\sqrt{2\pi(1 - \rho^2)}}\exp\left\{-\frac{\left[x - m_X - \rho\frac{\sigma_X}{\sigma_Y}(y - m_Y)\right]^2}{2\sigma_X^2(1 - \rho^2)}\right\}
The above equation represents a Gaussian pdf with mean m_X + \rho\frac{\sigma_X}{\sigma_Y}(y - m_Y) and variance \sigma_X^2(1 - \rho^2).
Similarly,
f_{Y|X}(y|x) = \frac{f_{X,Y}(x, y)}{f_X(x)} = \frac{1}{\sigma_Y\sqrt{2\pi(1 - \rho^2)}}\exp\left\{-\frac{\left[y - m_Y - \rho\frac{\sigma_Y}{\sigma_X}(x - m_X)\right]^2}{2\sigma_Y^2(1 - \rho^2)}\right\}
which is a Gaussian pdf with mean m_Y + \rho\frac{\sigma_Y}{\sigma_X}(x - m_X) and variance \sigma_Y^2(1 - \rho^2).
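The conditional mean and variance just derived are easy to check by simulation. The following Python sketch (an editorial illustration; the parameter values, seed and use of NumPy are assumptions, not from the text) generates correlated Gaussians and conditions on X near a chosen value:

    import numpy as np

    rng = np.random.default_rng(0)
    mx, my, sx, sy, rho = 1.0, 2.0, 2.0, 3.0, 0.6   # illustrative parameters
    n = 1_000_000
    x = rng.normal(mx, sx, n)
    # construct Y with correlation rho to X
    y = my + rho * sy / sx * (x - mx) + sy * np.sqrt(1 - rho**2) * rng.normal(size=n)

    x0 = 2.0
    sel = np.abs(x - x0) < 0.05                      # condition on X close to x0
    print(y[sel].mean(), my + rho * sy / sx * (x0 - mx))   # conditional mean
    print(y[sel].var(), sy**2 * (1 - rho**2))              # conditional variance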
Practice Problem
5.29 If X and Y are jointly Gaussian random variables with pdf
f_{X,Y}(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1 - \rho^2}}\exp\left\{-\frac{1}{2(1 - \rho^2)}\left[\frac{(x - m_X)^2}{\sigma_X^2} - \frac{2\rho(x - m_X)(y - m_Y)}{\sigma_X\sigma_Y} + \frac{(y - m_Y)^2}{\sigma_Y^2}\right]\right\}
show that the conditional distribution of Y given X = x is Gaussian with mean m_Y + \rho\frac{\sigma_Y}{\sigma_X}(x - m_X) and variance \sigma_Y^2(1 - \rho^2).
Solved Problems
5.62 If Z is a standard Gaussian random variable and Y is defined by Y = a + bZ + cZ^2, show that
\rho_{YZ} = \frac{b}{\sqrt{b^2 + 2c^2}}
5.63 Given Z = (3X + bY)^2 where X and Y are zero-mean random variables with variances \sigma_X^2 = 25 and \sigma_Y^2 = 9, and correlation coefficient \rho = -0.6.
(a) Find the value of the parameter b that minimizes the mean value of Z.
(b) Find the minimum mean value.
Solution Since X and Y are zero-mean,
\rho = \frac{Cov(X, Y)}{\sigma_X\sigma_Y} = -0.6 \Rightarrow R_{XY} = E[XY] = -0.6(5)(3) = -9
E[Z] = 9E[X^2] + b^2E[Y^2] + 6bE[XY] = 9(25) + 9b^2 - 54b
Setting dE[Z]/db = 18b - 54 = 0 gives b = 3, and the minimum mean value is E[Z] = 225 + 81 - 162 = 144.
R_{XY} = E[XY] = \int\int xy\,\{0.15\,\delta(x + 1)\delta(y) + 0.1\,\delta(x)\delta(y) + 0.1\,\delta(x)\delta(y - 2) + 0.4\,\delta(x - 1)\delta(y + 2) + 0.2\,\delta(x - 1)\delta(y - 1) + 0.05\,\delta(x - 1)\delta(y - 3)\}\,dx\,dy
Using \int\int xy\,\delta(x - k_1)\,\delta(y - k_2)\,dx\,dy = k_1k_2,
R_{XY} = 0.4(1)(-2) + 0.2(1)(1) + 0.05(1)(3) = -0.45
E[X] = m_{10} = \int\int x f_{X,Y}(x, y)\,dx\,dy. Using \int \delta(y - k)\,dy = 1,
E[X] = 0.15(-1) + 0.1(0) + 0.1(0) + 0.4(1) + 0.2(1) + 0.05(1) = 0.5
E[Y] = m_{01} = \int\int y f_{X,Y}(x, y)\,dx\,dy
= 0.15(0) + 0.1(0) + 0.1(2) + 0.4(-2) + 0.2(1) + 0.05(3) = -0.25
The correlation coefficient is \rho_{XY} = \frac{Cov(X, Y)}{\sigma_X\sigma_Y}, where
E[X^2] = m_{20} = \int\int x^2 f_{X,Y}(x, y)\,dx\,dy = 0.15(-1)^2 + 0.4(1)^2 + 0.2(1)^2 + 0.05(1)^2 = 0.8
E[Y^2] = m_{02} = \int\int y^2 f_{X,Y}(x, y)\,dx\,dy = 0.1(2)^2 + 0.4(-2)^2 + 0.2(1)^2 + 0.05(3)^2 = 2.65
E[X^2] = \sigma_X^2 + m_X^2 = 2 + (3)^2 = 11
E[Y] = 4
5.66 Consider two random variables X and Y such that Y = -3X + 15. The mean value and the variance of X are 5 and 3, respectively. Find the covariance of X and Y.
Solution
E[Y] = -3E[X] + 15 = -3(5) + 15 = 0
E[X^2] = \sigma_X^2 + m_X^2 = 3 + (5)^2 = 28
E[XY] = E[X(-3X + 15)] = -3E[X^2] + 15E[X] = -3(28) + 15(5) = -9
Cov(X, Y) = E[XY] - E[X]E[Y] = -9 - (5)(0) = -9
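The identity Cov(X, aX + b) = a\,Var(X) behind this answer can be confirmed with a short simulation (an editorial sketch; the choice of a Gaussian X and the use of NumPy are assumptions, not from the text):

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(5.0, np.sqrt(3.0), 500_000)  # any X with mean 5, variance 3
    y = -3 * x + 15
    cov = np.cov(x, y)[0, 1]
    print(cov)   # approximately -9 = -3 * Var(X)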
5.67 (a) For the random variables X and Y with the joint density function given in the solved problem, find the second-order moments of X and Y.
(b) Find the variances of X and Y.
(c) Find the correlation coefficient.
Solution Given:
f_{X,Y}(x, y) = \begin{cases}(x + y)^2/40 & -1 \le x \le 1,\ -3 \le y \le 3\\ 0 & \text{elsewhere}\end{cases}
The second-order moments are m_{20}, m_{11} and m_{02}, where
m_{nk} = E[X^nY^k] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^ny^k f_{X,Y}(x, y)\,dx\,dy
m_{20} = E[X^2] = \int_{-3}^{3}\int_{-1}^{1} x^2\frac{(x + y)^2}{40}\,dx\,dy = \frac{1}{40}\int_{-3}^{3}\int_{-1}^{1}(x^4 + 2x^3y + x^2y^2)\,dx\,dy
= \frac{1}{40}\int_{-3}^{3}\left\{\frac{x^5}{5}\Big|_{-1}^{1} + y\frac{x^4}{2}\Big|_{-1}^{1} + y^2\frac{x^3}{3}\Big|_{-1}^{1}\right\}dy = \frac{1}{40}\int_{-3}^{3}\left(\frac{2}{5} + y(0) + \frac{2}{3}y^2\right)dy
= \frac{1}{40}\left\{\frac{2}{5}y\Big|_{-3}^{3} + \frac{2}{3}\frac{y^3}{3}\Big|_{-3}^{3}\right\} = \frac{1}{40}\left\{\frac{12}{5} + 12\right\} = \frac{1}{40}\cdot\frac{72}{5} = \frac{9}{25}
m_{11} = E[XY] = \int_{-3}^{3}\int_{-1}^{1} xy\frac{(x + y)^2}{40}\,dx\,dy = \frac{1}{40}\int_{-3}^{3}\int_{-1}^{1}(x^3y + xy^3 + 2x^2y^2)\,dx\,dy
= \frac{1}{40}\int_{-3}^{3}\left\{y\frac{x^4}{4}\Big|_{-1}^{1} + y^3\frac{x^2}{2}\Big|_{-1}^{1} + \frac{2}{3}y^2x^3\Big|_{-1}^{1}\right\}dy
= \frac{1}{40}\int_{-3}^{3}\frac{4}{3}y^2\,dy = \frac{4}{40(9)}y^3\Big|_{-3}^{3} = 0.6
m_{02} = E[Y^2] = \int_{-3}^{3}\int_{-1}^{1} y^2\frac{(x + y)^2}{40}\,dx\,dy
= \frac{1}{40}\int_{-3}^{3}\int_{-1}^{1} y^2(x^2 + y^2 + 2xy)\,dx\,dy = \frac{1}{40}\int_{-3}^{3}\int_{-1}^{1}(y^2x^2 + y^4 + 2xy^3)\,dx\,dy
= \frac{1}{40}\int_{-3}^{3}\left\{y^2\frac{x^3}{3}\Big|_{-1}^{1} + y^4x\Big|_{-1}^{1} + 2y^3\frac{x^2}{2}\Big|_{-1}^{1}\right\}dy
= \frac{1}{40}\int_{-3}^{3}\left(\frac{2}{3}y^2 + 2y^4\right)dy = \frac{1}{40}\left\{\frac{2}{3}\frac{y^3}{3}\Big|_{-3}^{3} + \frac{2}{5}y^5\Big|_{-3}^{3}\right\}
= \frac{1}{40}(12 + 194.4) = 5.16
\sigma_X^2 = E[X^2] - \{E[X]\}^2
E[X] = m_{10} = \int_{-3}^{3}\int_{-1}^{1} x\frac{(x + y)^2}{40}\,dx\,dy = \frac{1}{40}\int_{-3}^{3}\int_{-1}^{1}(x^3 + xy^2 + 2x^2y)\,dx\,dy
= \frac{1}{40}\int_{-3}^{3}\frac{4}{3}y\,dy = \frac{1}{30}\cdot\frac{y^2}{2}\Big|_{-3}^{3} = 0
Similarly,
\sigma_Y^2 = E[Y^2] - \{E[Y]\}^2
E[Y] = m_{01} = \int_{-3}^{3}\int_{-1}^{1} y\frac{(x + y)^2}{40}\,dx\,dy = \frac{1}{40}\int_{-3}^{3}\int_{-1}^{1}(x^2y + y^3 + 2xy^2)\,dx\,dy
= \frac{1}{40}\int_{-3}^{3}\left(\frac{2}{3}y + 2y^3\right)dy = 0
Since E[X] = E[Y] = 0, \sigma_X^2 = m_{20} and \sigma_Y^2 = m_{02}, so
\rho_{XY} = \frac{E[XY]}{\sqrt{E[X^2]E[Y^2]}} = \frac{m_{11}}{\sqrt{m_{20}m_{02}}} = \frac{0.6}{\sqrt{0.36(5.16)}} = 0.44
Solution The correlation coefficient is \rho_{XY} = \frac{Cov(X, Y)}{\sigma_X\sigma_Y}, where Cov(X, Y) = E[XY] - E[X]E[Y].

X	Y	XY	X²	Y²
10	18	180	100	324
14	12	168	196	144
18	24	432	324	576
22	6	132	484	36
26	30	780	676	900
30	36	1080	900	1296
ΣX = 120	ΣY = 126	ΣXY = 2772	ΣX² = 2680	ΣY² = 3276
m_X = E[X] = \frac{\Sigma X}{n} = \frac{120}{6} = 20
m_Y = E[Y] = \frac{\Sigma Y}{n} = \frac{126}{6} = 21
E[XY] = \frac{\Sigma XY}{n} = \frac{2772}{6} = 462
\sigma_X^2 = E[X^2] - m_X^2;\quad \sigma_Y^2 = E[Y^2] - m_Y^2
E[X^2] = \frac{\Sigma X^2}{n} = \frac{2680}{6} = 446.667
E[Y^2] = \frac{\Sigma Y^2}{n} = \frac{3276}{6} = 546
\sigma_X^2 = 446.667 - (20)^2 = 46.667 \Rightarrow \sigma_X = 6.8313
\sigma_Y^2 = 546 - (21)^2 = 105 \Rightarrow \sigma_Y = 10.247
\rho = \frac{E[XY] - m_Xm_Y}{\sigma_X\sigma_Y} = \frac{462 - 20(21)}{(6.8313)(10.247)} = 0.6
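For readers who want to check such tabular computations, a minimal Python sketch (editorial illustration; the use of NumPy is an assumption) reproduces \rho = 0.6 from the data above:

    import numpy as np

    x = np.array([10, 14, 18, 22, 26, 30], float)
    y = np.array([18, 12, 24, 6, 30, 36], float)
    mx, my = x.mean(), y.mean()                  # 20 and 21
    sx = np.sqrt((x**2).mean() - mx**2)          # population SDs, as in the text
    sy = np.sqrt((y**2).mean() - my**2)
    r = ((x * y).mean() - mx * my) / (sx * sy)
    print(r)                                     # 0.6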
5.69 Two random variables X and Y have the joint probability density function given by
f_{X,Y}(x, y) = \begin{cases}k(1 - x^2y) & 0 \le x \le 1,\ 0 \le y \le 1\\ 0 & \text{otherwise}\end{cases}
(a) Find the value of k.
(b) Obtain the marginal probability density function of X and Y.
(c) Find the correlation coefficient between X and Y.
Solution
(a) We know \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx\,dy = 1
\Rightarrow \int_0^1\int_0^1 k(1 - x^2y)\,dx\,dy = 1
k\int_0^1\left\{x\Big|_0^1 - y\frac{x^3}{3}\Big|_0^1\right\}dy = k\int_0^1\left(1 - \frac{y}{3}\right)dy = k\left[1 - \frac{1}{6}\right] = k\left(\frac{5}{6}\right)
k\left(\frac{5}{6}\right) = 1 \Rightarrow k = \frac{6}{5}
(b) The marginal density function of X is
f_X(x) = \int f_{X,Y}(x, y)\,dy = \int_0^1\frac{6}{5}(1 - x^2y)\,dy = \frac{6}{5}\left[y - x^2\frac{y^2}{2}\right]_0^1 = \frac{6}{5}\left(1 - \frac{x^2}{2}\right);\quad 0 \le x \le 1
The marginal density function of Y is
f_Y(y) = \int_0^1\frac{6}{5}(1 - x^2y)\,dx = \frac{6}{5}\left(x - y\frac{x^3}{3}\right)\Big|_0^1 = \frac{6}{5}\left(1 - \frac{y}{3}\right);\quad 0 \le y \le 1
(c) E[X] = \int_0^1 x\,\frac{6}{5}\left(1 - \frac{x^2}{2}\right)dx = \frac{6}{5}\left[\frac{x^2}{2} - \frac{x^4}{8}\right]_0^1 = \frac{6}{5}\left[\frac{1}{2} - \frac{1}{8}\right] = \frac{9}{20}
E[Y] = \int_{-\infty}^{\infty} y f_Y(y)\,dy = \int_0^1 y\,\frac{6}{5}\left(1 - \frac{y}{3}\right)dy = \frac{6}{5}\left[\frac{y^2}{2} - \frac{y^3}{9}\right]_0^1 = \frac{6}{5}\left[\frac{1}{2} - \frac{1}{9}\right] = \frac{7}{15}
E[XY] = \int\int xy f_{X,Y}(x, y)\,dx\,dy = \frac{6}{5}\int_0^1\int_0^1 xy(1 - x^2y)\,dx\,dy = \frac{6}{5}\int_0^1\int_0^1(xy - x^3y^2)\,dx\,dy
= \frac{6}{5}\int_0^1\left(\frac{y}{2} - \frac{y^2}{4}\right)dy = \frac{6}{5}\left[\frac{1}{4} - \frac{1}{12}\right] = \frac{1}{5}
E[X^2] = \frac{6}{5}\int_0^1 x^2\left(1 - \frac{x^2}{2}\right)dx = \frac{6}{5}\left[\frac{x^3}{3} - \frac{x^5}{10}\right]_0^1 = \frac{6}{5}\left[\frac{1}{3} - \frac{1}{10}\right] = \frac{6}{5}\left[\frac{7}{30}\right] = \frac{7}{25}
E[Y^2] = \frac{6}{5}\int_0^1 y^2\left(1 - \frac{y}{3}\right)dy = \frac{6}{5}\left[\frac{y^3}{3} - \frac{y^4}{12}\right]_0^1 = \frac{6}{5}\left[\frac{1}{3} - \frac{1}{12}\right] = \frac{3}{10}
\sigma_X^2 = E[X^2] - \{E[X]\}^2 = \frac{7}{25} - \left(\frac{9}{20}\right)^2 = \frac{31}{400} \Rightarrow \sigma_X = \frac{\sqrt{31}}{20}
\sigma_Y^2 = E[Y^2] - \{E[Y]\}^2 = \frac{3}{10} - \left(\frac{7}{15}\right)^2 = \frac{37}{450} \Rightarrow \sigma_Y = \sqrt{\frac{37}{450}}
Cov(X, Y) = E[XY] - E[X]E[Y] = \frac{1}{5} - \frac{9}{20}\left(\frac{7}{15}\right) = \frac{-3}{300} = -0.01
\rho_{XY} = \frac{Cov(X, Y)}{\sigma_X\sigma_Y} = \frac{-0.01}{\left(\frac{\sqrt{31}}{20}\right)\sqrt{\frac{37}{450}}} = -0.13
5.70 Two random variables X and Y are related by the expression Y = aX + b, where a and b are
constants.
(a) Show that the correlation coefficient is
r = 1 for a > 0, any b
= –1 for a < 0, any b
(b) Show that the covariance is CXY = a sX2, where sX2 is the variance of X.
Solution Given: Y = aX + b
E[XY] = E[X(aX + b)] = aE[X^2] + bE[X]
E[Y] = aE[X] + b
E[X]E[Y] = E[X]\{aE[X] + b\} = a\{E[X]\}^2 + bE[X]
Cov(X, Y) = aE[X^2] + bE[X] - a\{E[X]\}^2 - bE[X] = a(E[X^2] - \{E[X]\}^2) = a\sigma_X^2
\sigma_Y^2 = E[(aX + b)^2] - \{E[aX + b]\}^2 = E[a^2X^2 + 2abX + b^2] - \{aE[X] + b\}^2
= a^2E[X^2] + 2abE[X] + b^2 - a^2\{E[X]\}^2 - b^2 - 2abE[X] = a^2(E[X^2] - \{E[X]\}^2) = a^2\sigma_X^2
\Rightarrow \sigma_Y = |a|\sigma_X
\rho = \frac{Cov(X, Y)}{\sigma_X\sigma_Y} = \frac{a\sigma_X^2}{|a|\sigma_X^2} = \frac{a}{|a|}
If a is positive, \rho = 1, and if a is negative, \rho = -1. Also, C_{XY} = a\sigma_X^2.
5.71 X and Y are random variables with standard deviations \sigma_X and \sigma_Y respectively. Find the value of K if U = X + KY and V = X + \frac{\sigma_X}{\sigma_Y}Y are uncorrelated.
Solution For U and V to be uncorrelated, Cov(U, V) = E[UV] - E[U]E[V] = 0
\Rightarrow E[X^2] - \{E[X]\}^2 + K\frac{\sigma_X}{\sigma_Y}\left[E[Y^2] - \{E[Y]\}^2\right] + \left(K + \frac{\sigma_X}{\sigma_Y}\right)\{E[XY] - E[X]E[Y]\} = 0
\Rightarrow \sigma_X^2 + K\frac{\sigma_X}{\sigma_Y}\sigma_Y^2 + \left(K + \frac{\sigma_X}{\sigma_Y}\right)Cov(X, Y) = 0
\sigma_X^2 + K\sigma_X\sigma_Y + \left(K + \frac{\sigma_X}{\sigma_Y}\right)Cov(X, Y) = 0
Dividing throughout by \sigma_X, we get
(\sigma_X + K\sigma_Y) + \frac{(\sigma_X + K\sigma_Y)}{\sigma_X\sigma_Y}Cov(X, Y) = 0
(\sigma_X + K\sigma_Y)\left[1 + \frac{Cov(X, Y)}{\sigma_X\sigma_Y}\right] = 0
from which K = -\frac{\sigma_X}{\sigma_Y}
Solution Given:
f_{X,Y}(x, y) = \frac{e^{-(x^2 + y^2)/2\sigma^2}}{2\pi\sigma^2}
and g(X, Y) = X^2 + Y^2.
E[g(X, Y)] = \int\int g(x, y)f_{X,Y}(x, y)\,dx\,dy = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(x^2 + y^2)\frac{e^{-(x^2 + y^2)/2\sigma^2}}{2\pi\sigma^2}\,dx\,dy
= \int\int\left\{\frac{x^2e^{-(x^2 + y^2)/2\sigma^2}}{2\pi\sigma^2} + \frac{y^2e^{-(x^2 + y^2)/2\sigma^2}}{2\pi\sigma^2}\right\}dx\,dy
= \int_{-\infty}^{\infty}\frac{x^2e^{-x^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}dx\int_{-\infty}^{\infty}\frac{e^{-y^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}dy + \int_{-\infty}^{\infty}\frac{e^{-x^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}dx\int_{-\infty}^{\infty}\frac{y^2e^{-y^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}dy
The pdf of a Gaussian random variable with zero mean and variance \sigma^2 is e^{-x^2/2\sigma^2}/\sqrt{2\pi\sigma^2}. Therefore,
\int_{-\infty}^{\infty}\frac{e^{-x^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}dx = 1, \quad\text{and similarly}\quad \int_{-\infty}^{\infty}\frac{e^{-y^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}dy = 1
Also, E[X^2] = \int x^2\frac{e^{-x^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}dx and E[Y^2] = \int y^2\frac{e^{-y^2/2\sigma^2}}{\sqrt{2\pi\sigma^2}}dy.
Since the means are zero, E[X^2] = \sigma_X^2 and E[Y^2] = \sigma_Y^2. Substituting all values in the equation, we get
E[g(X, Y)] = \sigma_X^2 + \sigma_Y^2 = 2\sigma^2 \quad (\text{since } \sigma_X^2 = \sigma_Y^2 = \sigma^2)
Solution Given:
f_{X,Y}(x, y) = \begin{cases}\dfrac{xy}{96} & 0 < x < 4,\ 1 < y < 5\\ 0 & \text{otherwise}\end{cases}
The marginal density function of X is
f_X(x) = \int f_{X,Y}(x, y)\,dy = \int_1^5\frac{xy}{96}\,dy = \frac{x}{96}\cdot\frac{y^2}{2}\Big|_1^5 = \frac{x}{96}(12) = \frac{x}{8},\quad 0 < x < 4
The marginal density function of Y is
f_Y(y) = \int f_{X,Y}(x, y)\,dx = \int_0^4\frac{xy}{96}\,dx = \frac{y}{96}\cdot\frac{x^2}{2}\Big|_0^4 = \frac{y}{12},\quad 1 < y < 5
E[X] = \int x f_X(x)\,dx = \int_0^4 x\left(\frac{x}{8}\right)dx = \frac{1}{8}\cdot\frac{x^3}{3}\Big|_0^4 = \frac{8}{3}
E[Y] = \int y f_Y(y)\,dy = \int_1^5 y\left(\frac{y}{12}\right)dy = \frac{1}{12}\cdot\frac{y^3}{3}\Big|_1^5 = \frac{31}{9}
E[XY] = \int\int xy f_{X,Y}(x, y)\,dx\,dy = \frac{1}{96}\int_1^5\int_0^4 x^2y^2\,dx\,dy
= \frac{1}{96}\int_1^5 y^2\left(\frac{64}{3}\right)dy = \frac{2}{9}\cdot\frac{y^3}{3}\Big|_1^5 = \frac{2}{9}\left(\frac{124}{3}\right) = \frac{248}{27}
E[2X + 3Y] = 2E[X] + 3E[Y] = 2\left(\frac{8}{3}\right) + 3\left(\frac{31}{9}\right) = \frac{16}{3} + \frac{31}{3} = \frac{47}{3}
Var(X) = E[X^2] - \{E[X]\}^2
E[X^2] = \int_0^4 x^2\left(\frac{x}{8}\right)dx = \frac{1}{8}\cdot\frac{x^4}{4}\Big|_0^4 = \frac{1}{8}\left[\frac{256}{4}\right] = 8
Var(X) = 8 - \left(\frac{8}{3}\right)^2 = 8 - \frac{64}{9} = \frac{8}{9}
Var(Y) = E[Y^2] - \{E[Y]\}^2
E[Y^2] = \int_1^5 y^2\left(\frac{y}{12}\right)dy = \frac{1}{12}\cdot\frac{y^4}{4}\Big|_1^5 = \frac{1}{12}\left(\frac{624}{4}\right) = 13
Var(Y) = 13 - \left(\frac{31}{9}\right)^2 = 13 - \frac{961}{81} = \frac{92}{81}
Cov(X, Y) = E[XY] - E[X]E[Y] = \frac{248}{27} - \frac{8}{3}\left(\frac{31}{9}\right) = 0
Since Cov(X, Y) = 0, X and Y are uncorrelated.
5.74 X and Y are two independent random variables. X is a uniform random variable in the interval
(0, 5) and Y is a zero-mean, unit-variance Gaussian random variable. Find E[e^XY^2].
5.75 For the pmf of random variables X and Y given in the table below, find the correlation and covariance, and indicate whether the random variables are independent, orthogonal or uncorrelated.

	Y = -1	Y = 0	Y = 1
X = -1	1/6	0	1/6
X = 0	0	1/3	0
X = 1	1/6	0	1/6
Solution
E[X] = \sum_i x_ip_X(x_i) = (-1)\frac{1}{3} + 0\left(\frac{1}{3}\right) + 1\left(\frac{1}{3}\right) = 0
E[Y] = \sum_j y_jp_Y(y_j) = (-1)\frac{1}{3} + 0\left(\frac{1}{3}\right) + 1\left(\frac{1}{3}\right) = 0
E[XY] = \sum_j\sum_i x_iy_jp_{XY}(x_i, y_j)
= (-1)(-1)\frac{1}{6} + (-1)(1)\frac{1}{6} + (1)(-1)\frac{1}{6} + (1)(1)\frac{1}{6} = \frac{1}{6} - \frac{1}{6} - \frac{1}{6} + \frac{1}{6} = 0
E[XY] = 0, so X and Y are orthogonal.
Cov(X, Y) = E[XY] - E[X]E[Y] = 0. Therefore, X and Y are uncorrelated.
P[X = -1, Y = -1] = \frac{1}{6};\quad P[X = -1] = \frac{1}{3};\quad P[Y = -1] = \frac{1}{3}
P[X = -1]P[Y = -1] = \frac{1}{9} \ne P[X = -1, Y = -1]
Therefore, X and Y are not independent.
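Moments of a small pmf table like this one are conveniently checked in code. The sketch below (an editorial illustration; the use of NumPy is an assumption, not from the text) computes the same quantities:

    import numpy as np

    xs = np.array([-1, 0, 1])
    ys = np.array([-1, 0, 1])
    # p[i, j] = P[X = xs[i], Y = ys[j]] from the table above
    p = np.array([[1/6, 0, 1/6],
                  [0, 1/3, 0],
                  [1/6, 0, 1/6]])

    EX  = (xs[:, None] * p).sum()
    EY  = (ys[None, :] * p).sum()
    EXY = (np.outer(xs, ys) * p).sum()
    print(EX, EY, EXY, EXY - EX * EY)   # all zero: orthogonal and uncorrelated
    # but P[X=-1, Y=-1] = 1/6 differs from P[X=-1] P[Y=-1] = 1/9: not independent
    print(p[0, 0], p[0, :].sum() * p[:, 0].sum())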
Practice Problem
5.30 For the joint pmf of random variables (X, Y) given in Solved Problem 4.11, find the correlation and covariance, and indicate whether the random variables are independent, orthogonal or uncorrelated.
(Ans. Correlated, not independent, not orthogonal)
Solved Problems
We use the identities
\int_{-\infty}^{\infty} x\,\delta(x - k_1)\,dx = k_1,\quad \int_{-\infty}^{\infty} y\,\delta(y - k_2)\,dy = k_2,\quad \int\int xy\,\delta(x - k_1)\,\delta(y - k_2)\,dx\,dy = k_1k_2
R_{XY} = E[XY] = \int\int xy f_{X,Y}(x, y)\,dx\,dy
= \int\int xy\{0.1\,\delta(x + 2)\delta(y - a) + 0.2\,\delta(x - 1)\delta(y - 2) + 0.3\,\delta(x + a)\delta(y - 2) + 0.4\,\delta(x - a)\delta(y - a)\}\,dx\,dy
= 0.1(-2)(a) + 0.2(1)(2) + 0.3(-a)(2) + 0.4(a)(a) = -0.2a + 0.4 - 0.6a + 0.4a^2
R_{XY} = 0.4a^2 - 0.8a + 0.4
\frac{dR_{XY}}{da} = 0.8a - 0.8 = 0
For a = 1, the correlation is minimum. The minimum correlation is
R_{XY}|_{min} = 0.4(1)^2 - 0.8(1) + 0.4 = 0
Since R_{XY} = 0, X and Y are orthogonal.
The correlation is
R_{XY} = E[XY] = \int\int xy f_{X,Y}(x, y)\,dx\,dy = 0.3(-1)(-2) + 0.05(-1)(3) + 0.2(0)(0) + 0.15(1)(0) + 0.2(1)(1) + 0.1(1)(2)
= 0.6 - 0.15 + 0.2 + 0.2 = 0.85
Cov(X, Y) = E[XY] - E[X]E[Y]
Using \int\int x\,\delta(x - k_1)\,\delta(y - k_2)\,dx\,dy = k_1, we have
E[X] = m_{10} = \int\int x f_{X,Y}(x, y)\,dx\,dy = 0.3(-1) + 0.05(-1) + 0.2(0) + 0.15(1) + 0.2(1) + 0.1(1) = 0.1
E[Y] = m_{01} = \int\int y f_{X,Y}(x, y)\,dx\,dy = 0.3(-2) + 0.05(3) + 0.2(1) + 0.1(2) = -0.05
Cov(X, Y) = 0.85 - (0.1)(-0.05) = 0.855
The correlation coefficient is \rho_{XY} = \frac{Cov(X, Y)}{\sigma_X\sigma_Y}, where Var(X) = E[X^2] - \{E[X]\}^2 and, using \int\int x^2\,\delta(x - K_1)\,\delta(y - K_2)\,dx\,dy = K_1^2,
E[X^2] = m_{20} = \int\int x^2 f_{X,Y}(x, y)\,dx\,dy = 0.3(-1)^2 + 0.05(-1)^2 + 0.15(1)^2 + 0.2(1)^2 + 0.1(1)^2 = 0.80
E[Y^2] = m_{02} = \int\int y^2 f_{X,Y}(x, y)\,dx\,dy = 0.3(-2)^2 + 0.05(3)^2 + 0.2(1)^2 + 0.1(2)^2 = 2.25
5.78 Let X and Y be two random variables each having three values –1, 0, 1 and having the following
joint probability distribution
Y
X –1 0 1
–1 0 0.2 0
0 0.1 0.2 0.1
1 0.1 0.2 0.1
Prove that X and Y have different expectations. Also prove that X and Y are uncorrelated and find Var(X)
and Var (Y).
Solution Redraw the table and sum the rows and columns to find the marginal distributions of X and Y.

X\Y	-1	0	1	Total
-1	0	0.2	0	0.2 = P(X = -1)
0	0.1	0.2	0.1	0.4 = P(X = 0)
1	0.1	0.2	0.1	0.4 = P(X = 1)
Total	0.2	0.6	0.2

E[X] = (-1)(0.2) + 0(0.4) + 1(0.4) = 0.2;\quad E[Y] = (-1)(0.2) + 0(0.6) + 1(0.2) = 0
so X and Y have different expectations. Also E[XY] = (1)(-1)(0.1) + (1)(1)(0.1) = 0 = E[X]E[Y], so X and Y are uncorrelated.
E[X^2] = \sum_i x_i^2p_X(x_i) = (-1)^2(0.2) + (0)^2(0.4) + (1)^2(0.4) = 0.2 + 0.4 = 0.6
Var(X) = 0.6 - (0.2)^2 = 0.56
Var(Y) = E[Y^2] - \{E[Y]\}^2, where E[Y^2] = \sum_j y_j^2p_Y(y_j) = (-1)^2(0.2) + (1)^2(0.2) = 0.4, so Var(Y) = 0.4.
Solution
(a) Given
f_{X,Y}(x, y) = \begin{cases}\dfrac{xy}{9} & 0 < x < 2,\ 0 < y < 3\\ 0 & \text{elsewhere}\end{cases}
The marginal density function of X is
f_X(x) = \int f_{X,Y}(x, y)\,dy = \int_0^3\frac{xy}{9}\,dy = \frac{x}{9}\cdot\frac{y^2}{2}\Big|_0^3 = \frac{x}{2}\quad\text{for } 0 < x < 2
The marginal density function of Y is
f_Y(y) = \int f_{X,Y}(x, y)\,dx = \int_0^2\frac{xy}{9}\,dx = \frac{y}{9}\cdot\frac{x^2}{2}\Big|_0^2 = \frac{2y}{9}\quad\text{for } 0 < y < 3
f_X(x)f_Y(y) = \frac{x}{2}\left(\frac{2y}{9}\right) = \frac{xy}{9} = f_{X,Y}(x, y)
Therefore, X and Y are statistically independent.
(b) E[XY] = \int\int xy f_{X,Y}(x, y)\,dx\,dy = \int_0^2\int_0^3\frac{(xy)^2}{9}\,dy\,dx = \frac{1}{9}\int_0^2 x^2\left(\int_0^3 y^2\,dy\right)dx
= \int_0^2 x^2\,dx = \frac{x^3}{3}\Big|_0^2 = \frac{8}{3}
E[X] = \int x f_X(x)\,dx = \int_0^2 x(x/2)\,dx = \frac{1}{2}\cdot\frac{x^3}{3}\Big|_0^2 = \frac{4}{3}
E[Y] = \int y f_Y(y)\,dy = \int_0^3 y\left(\frac{2y}{9}\right)dy = \frac{2}{9}\cdot\frac{y^3}{3}\Big|_0^3 = 2
E[X]E[Y] = \frac{4}{3}(2) = \frac{8}{3}
Since E[XY] = E[X]E[Y], X and Y are uncorrelated.
5.80 Consider two statistically independent random variables X and Y with mX = 3/4; E[X2] = 4; mY = 1
and E[Y2] = 5. For a random variable W = X – 2Y + 1, find (a) RXY, (b) RXW, (c) RYW, and (d) Cov (X, Y).
Solution Given: m_X = \frac{3}{4};\ m_Y = 1
(a) RXY = E[XY]
Since X and Y are statistically independent,
Ê 3ˆ 3
RXY = E[XY] = E[X] E[Y] = mX mY = Á ˜ (1) =
Ë 4¯ 4
(b) R_{XW} = E[X(X - 2Y + 1)] = E[X^2] - 2E[XY] + E[X]
= 4 - 2\left(\frac{3}{4}\right) + \frac{3}{4} = 3.25
(c) RYW = E[Y(X – 2Y + 1)]
= E[XY] – 2E[Y2] + E[Y]
3
= - 2(5) + 1 = - 8.25
4
(d) Cov(X, Y) = E[XY] – mX mY
Since X and Y are statistically independent,
E[XY] = E[X] E[Y]
fi Cov (X, Y) = 0
5.81 Two random variables have a uniform density on a circular region defined by
f_{X,Y}(x, y) = \begin{cases}1/\pi r^2 & x^2 + y^2 \le r^2\\ 0 & \text{elsewhere}\end{cases}
Find the mean value of the function g(X, Y) = X^2 + Y^2.
Solution Given g(X, Y) = X^2 + Y^2 and the pdf above (Fig. 5.29 shows the circular region),
E[g(X, Y)] = \int\int g(x, y)f_{X,Y}(x, y)\,dx\,dy = \int_{-r}^{r}\int_{y = -\sqrt{r^2 - x^2}}^{\sqrt{r^2 - x^2}}\frac{x^2 + y^2}{\pi r^2}\,dy\,dx
= \frac{1}{\pi r^2}\int_{-r}^{r}\left[2x^2\sqrt{r^2 - x^2} + \frac{2}{3}(r^2 - x^2)^{3/2}\right]dx
Substituting x = r\sin\theta (dx = r\cos\theta\,d\theta, with \theta running from -\pi/2 to \pi/2),
E[g(X, Y)] = \frac{1}{\pi r^2}\left\{\int_{-\pi/2}^{\pi/2}2r^2\sin^2\theta\,(r\cos\theta)(r\cos\theta)\,d\theta + \frac{2}{3}\int_{-\pi/2}^{\pi/2}r^3\cos^3\theta\,(r\cos\theta)\,d\theta\right\}
= \frac{2r^4}{\pi r^2}\left\{\int_{-\pi/2}^{\pi/2}\sin^2\theta\cos^2\theta\,d\theta + \frac{1}{3}\int_{-\pi/2}^{\pi/2}\cos^4\theta\,d\theta\right\}
= \frac{2r^2}{\pi}\left\{\int_{-\pi/2}^{\pi/2}\frac{1 - \cos 4\theta}{8}\,d\theta + \frac{1}{24}\int_{-\pi/2}^{\pi/2}(3 + 4\cos 2\theta + \cos 4\theta)\,d\theta\right\}
= \frac{2r^2}{\pi}\left[\frac{1}{8}\theta\Big|_{-\pi/2}^{\pi/2} + \frac{1}{8}\theta\Big|_{-\pi/2}^{\pi/2}\right] = \frac{2r^2}{\pi}\left[\frac{\pi}{4}\right] = \frac{r^2}{2}
m_{nk} = (-j)^{n + k}\left.\frac{\partial^{n + k}\phi_{X,Y}(\omega_1, \omega_2)}{\partial\omega_1^n\,\partial\omega_2^k}\right|_{\omega_1 = 0,\ \omega_2 = 0} (5.67)
(ii) If X and Y are independent random variables, then
\phi_{X,Y}(\omega_1, \omega_2) = \int\int e^{j\omega_1x}e^{j\omega_2y}f_X(x)f_Y(y)\,dx\,dy = \int e^{j\omega_1x}f_X(x)\,dx\int e^{j\omega_2y}f_Y(y)\,dy = \phi_X(\omega_1)\phi_Y(\omega_2)
(iii) If X and Y are independent random variables then
fX+Y(w) = fX(w) fY(w)
\phi_{X+Y}(\omega) = E[e^{j(X + Y)\omega}] = E[e^{j\omega X}e^{j\omega Y}] (5.69)
= \int\int e^{j\omega x}e^{j\omega y}f_X(x)f_Y(y)\,dx\,dy = \int e^{j\omega x}f_X(x)\,dx\int e^{j\omega y}f_Y(y)\,dy = \phi_X(\omega)\phi_Y(\omega)
(iv) If X and Y are two random variables, the joint moments can be derived from the joint characteristic
function as
m_{nk} = (-j)^{n + k}\left.\frac{\partial^{n + k}\phi_{XY}(\omega_1, \omega_2)}{\partial\omega_1^n\,\partial\omega_2^k}\right|_{\omega_1 = 0,\ \omega_2 = 0}
REVIEW QUESTIONS
13. Define joint characteristic function.
14. What are the properties of a characteristic function?
The joint moment generating function is defined as
M_{X,Y}(u_1, u_2) = E[e^{u_1X}e^{u_2Y}] = \int\int e^{u_1x}e^{u_2y}f_{X,Y}(x, y)\,dx\,dy (5.70)
For discrete random variables,
M_{X,Y}(u_1, u_2) = \sum_m\sum_n e^{u_1x_m + u_2y_n}\,p_{XY}(x_m, y_n) (5.71)
Solved Problems
5.82 Two random variables X_1 and X_2 have the joint characteristic function
\phi_{X_1,X_2}(\omega_1, \omega_2) = [(1 - j2\omega_1)(1 - j2\omega_2)]^{-N/2},\quad N > 0 \text{ an integer.}
(a) Find the correlation and moments m20 and m02.
(b) Determine the means of X1 and X2.
(c) What is the correlation coefficient?
Solution The joint moments are obtained from
m_{nk} = (-j)^{n + k}\left.\frac{\partial^{n + k}\phi_{X_1,X_2}(\omega_1, \omega_2)}{\partial\omega_1^n\,\partial\omega_2^k}\right|_{\omega_1 = \omega_2 = 0}
(a) m_{20} = (-j)^2\left.\frac{\partial^2\phi_{X_1,X_2}(\omega_1, \omega_2)}{\partial\omega_1^2}\right|_{\omega_1 = 0,\ \omega_2 = 0}
= (-j)^2\frac{\partial}{\partial\omega_1}\left[-\frac{N}{2}(1 - j2\omega_2)^{-N/2}(1 - j2\omega_1)^{-\frac{N}{2} - 1}(-2j)\right]_{\omega_1 = \omega_2 = 0}
= -\left[-\frac{N}{2}(1 - j2\omega_2)^{-N/2}\left(-\frac{N}{2} - 1\right)(1 - j2\omega_1)^{-\frac{N}{2} - 2}(-2j)^2\right]_{\omega_1 = \omega_2 = 0} = N(N + 2)
By symmetry,
m_{02} = (-j)^2\left.\frac{\partial^2\phi_{X_1,X_2}(\omega_1, \omega_2)}{\partial\omega_2^2}\right|_{\omega_1 = \omega_2 = 0} = N(N + 2)
The correlation is
E[X_1X_2] = m_{11} = (-j)^2\left.\frac{\partial^2\phi_{X_1X_2}(\omega_1, \omega_2)}{\partial\omega_1\,\partial\omega_2}\right|_{\omega_1 = \omega_2 = 0}
= -\frac{\partial}{\partial\omega_2}\left[-\frac{N}{2}(1 - j2\omega_1)^{-\frac{N}{2} - 1}(-j2)(1 - j2\omega_2)^{-N/2}\right]_{\omega_1 = \omega_2 = 0}
= -\left[\left(-\frac{N}{2}\right)^2(-j2)^2(1 - j2\omega_1)^{-\frac{N}{2} - 1}(1 - j2\omega_2)^{-\frac{N}{2} - 1}\right]_{\omega_1 = \omega_2 = 0} = N^2
(b) E[X_1] = m_{10} = -j\left.\frac{\partial\phi_{X_1X_2}(\omega_1, \omega_2)}{\partial\omega_1}\right|_{\omega_1 = \omega_2 = 0} = -j\left[-\frac{N}{2}(1 - j2\omega_1)^{-\frac{N}{2} - 1}(-j2)(1 - j2\omega_2)^{-N/2}\right]_{\omega_1 = \omega_2 = 0} = N
Similarly, m_{01} = N.
(c) \sigma_{X_1}^2 = E[X_1^2] - \{E[X_1]\}^2 = m_{20} - (m_{10})^2 = N(N + 2) - N^2 = 2N
Similarly, \sigma_{X_2}^2 = 2N. The correlation coefficient is
\rho_{X_1X_2} = \frac{E[X_1X_2] - E[X_1]E[X_2]}{\sigma_{X_1}\sigma_{X_2}} = \frac{N^2 - N(N)}{\sqrt{4N^2}} = 0
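Differentiating characteristic functions by hand is error-prone, so a symbolic check is useful. The sketch below (an editorial illustration; the use of SymPy is an assumption, not from the text) reproduces the moments of Problem 5.82:

    import sympy as sp

    w1, w2 = sp.symbols('w1 w2')
    N = sp.symbols('N', positive=True)
    phi = ((1 - 2*sp.I*w1) * (1 - 2*sp.I*w2))**(-N/2)

    def moment(n, k):
        # m_nk = (-j)^{n+k} d^{n+k} phi / dw1^n dw2^k at w1 = w2 = 0
        d = sp.diff(phi, w1, n, w2, k)
        return sp.simplify((-sp.I)**(n + k) * d.subs({w1: 0, w2: 0}))

    print(moment(2, 0))   # N**2 + 2*N, i.e. N(N + 2)
    print(moment(1, 1))   # N**2
    print(moment(1, 0))   # N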
5.83 If X and Y are independent Poisson random variables with parameters \lambda_1 and \lambda_2, show that the conditional density function of X given (X + Y) is binomial.
Solution Given X and Y are independent Poisson random variables. The MGF of a Poisson random variable is e^{\lambda(e^u - 1)}. That is,
M_X(u) = e^{\lambda_1(e^u - 1)} \quad\text{and}\quad M_Y(u) = e^{\lambda_2(e^u - 1)}
Let X + Y = Z. Then
M_Z(u) = M_{X+Y}(u) = M_X(u)M_Y(u) = e^{(\lambda_1 + \lambda_2)(e^u - 1)}
That is, Z is also a Poisson random variable with parameter (\lambda_1 + \lambda_2). We have
P(X = x) = \frac{e^{-\lambda_1}(\lambda_1)^x}{x!};\quad P(Y = y) = \frac{e^{-\lambda_2}(\lambda_2)^y}{y!};\quad P(X + Y = z) = \frac{e^{-(\lambda_1 + \lambda_2)}(\lambda_1 + \lambda_2)^z}{z!}
Hence, for 0 \le x \le z,
P(X = x \mid X + Y = z) = \frac{P(X = x)P(Y = z - x)}{P(X + Y = z)} = \binom{z}{x}\left(\frac{\lambda_1}{\lambda_1 + \lambda_2}\right)^x\left(\frac{\lambda_2}{\lambda_1 + \lambda_2}\right)^{z - x}
which is a binomial distribution with parameters z and \lambda_1/(\lambda_1 + \lambda_2).
5.84 If X and Y are zero mean independent normal random variables with variances s12 and s22 then
prove that the random variable z = ax + by + c; c π 0 is also a normal random variable.
Solution Given: X ~ N(0, \sigma_1^2), Y ~ N(0, \sigma_2^2) and Z = aX + bY + c.
\phi_Z(\omega) = E[e^{j\omega Z}] = E[e^{j\omega(aX + bY + c)}] = E[e^{j\omega aX}]E[e^{j\omega bY}]e^{j\omega c} = e^{j\omega c}\phi_X(a\omega)\phi_Y(b\omega)
For a normal random variable,
\phi_X(\omega) = e^{j\omega m_X - \sigma_X^2\omega^2/2}
With m_X = 0 and \sigma_X^2 = \sigma_1^2,
\phi_X(\omega) = e^{-\sigma_1^2\omega^2/2},\quad \phi_Y(\omega) = e^{-\sigma_2^2\omega^2/2}
\Rightarrow \phi_Z(\omega) = e^{-a^2\sigma_1^2\omega^2/2}\,e^{-b^2\sigma_2^2\omega^2/2}\,e^{j\omega c} = e^{j\omega c - (a^2\sigma_1^2 + b^2\sigma_2^2)\omega^2/2}
On comparing, we find Z ~ N(c, a^2\sigma_1^2 + b^2\sigma_2^2).
5.85 Show that the joint characteristic function of N independent random variables X_i, each having characteristic function \phi_{X_i}(\omega_i), is
\phi_{X_1,X_2,\ldots,X_N}(\omega_1, \omega_2, \ldots, \omega_N) = \prod_{i=1}^{N}\phi_{X_i}(\omega_i)
Solution
\phi_{X_1,\ldots,X_N}(\omega_1, \ldots, \omega_N) = E\left[e^{j(\omega_1X_1 + \cdots + \omega_NX_N)}\right] = \int\cdots\int f_{X_1,\ldots,X_N}(x_1, \ldots, x_N)\,e^{j(\omega_1x_1 + \cdots + \omega_Nx_N)}\,dx_1\cdots dx_N
Since X_1, X_2, \ldots, X_N are independent, f_{X_1,\ldots,X_N}(x_1, \ldots, x_N) = f_{X_1}(x_1)\cdots f_{X_N}(x_N), so
\phi_{X_1,\ldots,X_N}(\omega_1, \ldots, \omega_N) = \int f_{X_1}(x_1)e^{j\omega_1x_1}dx_1\cdots\int f_{X_N}(x_N)e^{j\omega_Nx_N}dx_N = \phi_{X_1}(\omega_1)\cdots\phi_{X_N}(\omega_N) = \prod_{i=1}^{N}\phi_{X_i}(\omega_i)
5.86 For two zero-mean Gaussian random variables X and Y, show that their joint characteristic function is
\phi_{X,Y}(\omega_1, \omega_2) = \exp\left\{-\frac{1}{2}\left(\sigma_X^2\omega_1^2 + 2\rho\sigma_X\sigma_Y\omega_1\omega_2 + \sigma_Y^2\omega_2^2\right)\right\}
Solution
\phi_{X,Y}(\omega_1, \omega_2) = \int\int f_{X,Y}(x, y)e^{j\omega_1x + j\omega_2y}\,dx\,dy
= \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1 - \rho^2}}\exp\left[\frac{-1}{2(1 - \rho^2)}\left(\frac{x^2}{\sigma_X^2} - \frac{2\rho xy}{\sigma_X\sigma_Y} + \frac{y^2}{\sigma_Y^2}\right)\right]\exp(j\omega_1x + j\omega_2y)\,dx\,dy
Grouping the y-dependent factors,
\phi_{X,Y}(\omega_1, \omega_2) = \int_{-\infty}^{\infty}\frac{1}{2\pi\sigma_X\sqrt{1 - \rho^2}}\exp\left[\frac{-x^2}{2(1 - \rho^2)\sigma_X^2} + j\omega_1x\right]I_1\,dx
where
I_1 = \int_{-\infty}^{\infty}\frac{1}{\sigma_Y}\exp\left[\frac{-1}{2(1 - \rho^2)}\left(\frac{y^2}{\sigma_Y^2} - \frac{2\rho xy}{\sigma_X\sigma_Y}\right) + j\omega_2y\right]dy
Let y/\sigma_Y = t (so dy = \sigma_Y\,dt) and a = \frac{1}{2(1 - \rho^2)}. Then
I_1 = \int_{-\infty}^{\infty}e^{-at^2}e^{bt}\,dt, \qquad b = \left[\frac{2\rho x}{\sigma_X} + j2\omega_2(1 - \rho^2)\sigma_Y\right]a
Completing the square,
\int_{-\infty}^{\infty}e^{-at^2 + bt}\,dt = e^{b^2/4a}\int_{-\infty}^{\infty}e^{-a(t - b/2a)^2}dt = e^{b^2/4a}\sqrt{\frac{\pi}{a}}
so that
I_1 = \sqrt{2\pi(1 - \rho^2)}\exp\left\{\frac{1}{2(1 - \rho^2)}\left[\frac{\rho x}{\sigma_X} + j\omega_2(1 - \rho^2)\sigma_Y\right]^2\right\}
Expanding the square and combining with the x-dependent exponent, the x^2 terms collapse to -x^2/2\sigma_X^2 and
\phi_{X,Y}(\omega_1, \omega_2) = \exp\left[-\frac{1}{2}(1 - \rho^2)\omega_2^2\sigma_Y^2\right]\frac{1}{\sqrt{2\pi}\,\sigma_X}\int_{-\infty}^{\infty}\exp\left[\frac{-x^2}{2\sigma_X^2} + \frac{j\omega_2\sigma_Y\rho x}{\sigma_X} + j\omega_1x\right]dx
Let I_2 denote the integral, and substitute x/\sigma_X = m (dx = \sigma_X\,dm):
I_2 = \sigma_X\int_{-\infty}^{\infty}\exp\left[-\frac{m^2}{2} + j(\rho\omega_2\sigma_Y + \omega_1\sigma_X)m\right]dm
Let n = j(\rho\omega_2\sigma_Y + \omega_1\sigma_X). Then
I_2 = \sigma_X\int_{-\infty}^{\infty}\exp\left\{-\frac{1}{2}(m^2 - 2mn + n^2)\right\}e^{n^2/2}\,dm = \sigma_X e^{n^2/2}\int_{-\infty}^{\infty}e^{-\frac{1}{2}(m - n)^2}dm = \sigma_X e^{n^2/2}\sqrt{2\pi}
Hence
\phi_{X,Y}(\omega_1, \omega_2) = \exp\left[-\frac{1}{2}(1 - \rho^2)\omega_2^2\sigma_Y^2\right]\exp\left[-\frac{1}{2}(\rho^2\omega_2^2\sigma_Y^2 + \omega_1^2\sigma_X^2 + 2\rho\omega_1\omega_2\sigma_X\sigma_Y)\right]
= \exp\left[-\frac{1}{2}(\omega_1^2\sigma_X^2 + 2\rho\omega_1\omega_2\sigma_X\sigma_Y + \omega_2^2\sigma_Y^2)\right]
5.87 If X and Y are independent gamma random variables with common parameters a and b, find the pdf
of (a) X + Y, and (b) X/Y.
Solution
(a) Given: Z = X + Y. The characteristic function of a gamma random variable is
\phi_X(\omega) = \frac{1}{(1 - j\omega b)^a}
Similarly, \phi_Y(\omega) = \frac{1}{(1 - j\omega b)^a}.
\phi_Z(\omega) = \phi_{X+Y}(\omega) = \phi_X(\omega)\phi_Y(\omega) = \frac{1}{(1 - j\omega b)^a}\cdot\frac{1}{(1 - j\omega b)^a} = \frac{1}{(1 - j\omega b)^{2a}}
Therefore, X + Y is a gamma random variable with parameters 2a and b.
(b) Given: Z = X/Y, with
f_X(x) = \frac{x^{a - 1}e^{-x/b}}{\Gamma(a)b^a} \quad\text{and}\quad f_Y(y) = \frac{y^{a - 1}e^{-y/b}}{\Gamma(a)b^a}
For the ratio,
f_Z(z) = \int_0^\infty y\,f_X(zy)f_Y(y)\,dy = \frac{(z)^{a - 1}}{[\Gamma(a)b^a]^2}\int_0^\infty y^{2a - 1}e^{-y(1 + z)/b}\,dy
Let (1 + z)y/b = u, so dy = \frac{b\,du}{1 + z}:
f_Z(z) = \frac{(z)^{a - 1}}{[\Gamma(a)b^a]^2}\int_0^\infty\left(\frac{bu}{1 + z}\right)^{2a - 1}e^{-u}\left(\frac{b}{1 + z}\right)du = \frac{(z)^{a - 1}}{[\Gamma(a)b^a]^2}\left(\frac{b}{1 + z}\right)^{2a}\Gamma(2a) = \frac{\Gamma(2a)}{[\Gamma(a)]^2}\frac{(z)^{a - 1}}{(1 + z)^{2a}}
Practice Problem
5.31 If X and Y are independent gamma random variables with parameters (a1, b) and (a2, b) then find the pdfs of the
random variables.
(a) Z = X + Y (b) Z = X/(X + Y) (c) Z = X/Y.
(Ans. (a) Gamma distribution (b) beta distribution with parameter (a1 + a2, b))
Solved Problems
5.88 X and Y are independent, identically distributed binomial random variables with parameters n and
p. Show that Z = X + Y is also a binomial random variable.
Solution The MGF of a binomial random variable is M_X(u) = [pe^u + (1 - p)]^n. Similarly,
M_Y(u) = [pe^u + (1 - p)]^n
M_{X+Y}(u) = M_X(u)M_Y(u) = [pe^u + (1 - p)]^{2n}
Therefore, (X + Y) is a binomial random variable with parameters 2n and p.

P(Z = z) = p^2\sum_{k=0}^{z}q^z = p^2q^z(z + 1),\quad z = 0, 1, 2, \ldots
5.90 If X and Y are independent exponential random variables with common parameter \lambda, show that X/(X + Y) is U(0, 1).
Solution Let Z = X/(X + Y) and W = Y. The Jacobian of the transformation is
J(x, y) = \begin{vmatrix}\partial z/\partial x & \partial z/\partial y\\ \partial w/\partial x & \partial w/\partial y\end{vmatrix} = \begin{vmatrix}\dfrac{y}{(x + y)^2} & \dfrac{-x}{(x + y)^2}\\ 0 & 1\end{vmatrix} = \frac{y}{(x + y)^2} = \frac{w}{\left(\dfrac{w}{1 - z}\right)^2} = \frac{(1 - z)^2}{w}
f_{Z,W}(z, w) = \frac{f_{X,Y}(x, y)}{|J(x, y)|} = \frac{f_X(x)f_Y(y)}{|J(x, y)|} = \frac{\lambda^2e^{-\lambda(x + y)}}{|J(x, y)|} = \frac{w\lambda^2}{(1 - z)^2}e^{-\lambda w/(1 - z)}
f_Z(z) = \int_0^\infty\frac{w\lambda^2}{(1 - z)^2}e^{-\lambda w/(1 - z)}\,dw
Let \frac{\lambda w}{1 - z} = t, so dw = \frac{(1 - z)}{\lambda}dt:
f_Z(z) = \int_0^\infty te^{-t}\,dt = 1
\Rightarrow f_Z(z) = 1 \text{ for } 0 \le z \le 1
5.91 Consider random variables Y1 and Y2 related to arbitrary random variables X and Y by the coordinate
rotation.
Y1 = X cos q + Y sin q; Y2 = –X sin q + Y cos q
Find the covariance of Y1 and Y2. For what value of q are the random variables Y1, and Y2 uncorrelated?
Solution Given: Y_1 = X\cos\theta + Y\sin\theta. If m_X and m_Y are the means of X and Y respectively, then m_{Y_1} = m_X\cos\theta + m_Y\sin\theta. Similarly, m_{Y_2} = -m_X\sin\theta + m_Y\cos\theta.
Cov(Y_1, Y_2) = E[\{X\cos\theta + Y\sin\theta - m_X\cos\theta - m_Y\sin\theta\}\{-X\sin\theta + Y\cos\theta + m_X\sin\theta - m_Y\cos\theta\}]
= E\{[(X - m_X)\cos\theta + (Y - m_Y)\sin\theta][-(X - m_X)\sin\theta + (Y - m_Y)\cos\theta]\}
= E[-(X - m_X)^2\cos\theta\sin\theta + (Y - m_Y)^2\sin\theta\cos\theta - (X - m_X)(Y - m_Y)\sin^2\theta + (X - m_X)(Y - m_Y)\cos^2\theta]
= \frac{1}{2}(\sigma_Y^2 - \sigma_X^2)\sin 2\theta + Cov(X, Y)\cos 2\theta
Y_1 and Y_2 are uncorrelated when Cov(Y_1, Y_2) = 0, i.e. when \tan 2\theta = \dfrac{2\,Cov(X, Y)}{\sigma_X^2 - \sigma_Y^2}.
5.92 Two Gaussian random variables X and Y have variances \sigma_X^2 = 9 and \sigma_Y^2 = 4 respectively and correlation coefficient \rho. It is known that a coordinate rotation by an angle \pi/8 results in new random variables Y_1 and Y_2 that are uncorrelated. What is \rho?
Solution The covariance of the rotated variables Y_1, Y_2 is related to that of X, Y by
Cov(Y_1, Y_2) = \frac{1}{2}(\sigma_Y^2 - \sigma_X^2)\sin 2\theta + Cov(X, Y)\cos 2\theta
Given: \sigma_X^2 = 9,\ \sigma_Y^2 = 4 \Rightarrow \sigma_X = 3,\ \sigma_Y = 2; also \theta = \pi/8.
Cov(Y_1, Y_2) = \frac{1}{2}(4 - 9)\sin\frac{\pi}{4} + \rho(3)(2)\cos\frac{\pi}{4} = -\frac{5}{2}\sin\frac{\pi}{4} + 6\rho\cos\frac{\pi}{4}
For uncorrelated random variables, Cov(Y_1, Y_2) = 0, so
\rho = \frac{\frac{5}{2}\sin\frac{\pi}{4}}{6\cos\frac{\pi}{4}} = \frac{5}{12}\tan\frac{\pi}{4} = \frac{5}{12}
5.93 Two Gaussian random variables X and Y have variances sX2 = 16 and sY2 = 9 respectively with
correlation coefficient r. It is known that a coordinate rotation by an angle p/6 results in new random
variables Y1 and Y2 that are uncorrelated. What is r?
Solution Setting Cov(Y_1, Y_2) = 0 in the same relation gives
\rho = \frac{\sigma_X^2 - \sigma_Y^2}{2\sigma_X\sigma_Y}\tan 2\theta = \frac{16 - 9}{2(4)(3)}\tan\left(\frac{\pi}{3}\right) = \frac{7}{24}\tan\left(\frac{\pi}{3}\right) = 0.505
5.94 X and Y are random variables with joint Gaussian pdf with sX2 = sY2 and r = 1. Find a transformation
matrix such that new random variables Y1 and Y2 are statistically independent.
Solution The angle of rotation is
\theta = \frac{1}{2}\tan^{-1}\left(\frac{2\rho\sigma_X\sigma_Y}{\sigma_X^2 - \sigma_Y^2}\right) = \frac{1}{2}\tan^{-1}(\infty) = \frac{1}{2}\left(\frac{\pi}{2}\right) \Rightarrow \theta = \frac{\pi}{4}
The transformation matrix is
[T] = \begin{bmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{bmatrix} = \begin{bmatrix}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{bmatrix}
5.95 The two Gaussian random variables X and Y have first and second-order moments mX = 1; E[X2] =
2.5, mY = 1.8, E[Y2] = 3.6 and RXY = 2.2. Find (a) Cov(X, Y), and (b) r. Also, find the angle q of a coordinate
rotation that will generate new random variables that are statistically independent.
Solution
(a) Cov(X, Y) = E[(X - m_X)(Y - m_Y)] = E[XY] - m_Xm_Y = 2.2 - 1(1.8) = 0.4
(b) \sigma_X^2 = E[X^2] - (m_X)^2 = 2.5 - (1)^2 = 1.5
\sigma_Y^2 = E[Y^2] - (m_Y)^2 = 3.6 - (1.8)^2 = 0.36 \Rightarrow \sigma_Y = 0.6
\rho = \frac{Cov(X, Y)}{\sigma_X\sigma_Y} = \frac{0.4}{\sqrt{1.5}(0.6)} = 0.544
The angle of rotation that decorrelates (and hence, for Gaussians, makes independent) the new variables is
\theta = \frac{1}{2}\tan^{-1}\left(\frac{2\,Cov(X, Y)}{\sigma_X^2 - \sigma_Y^2}\right) = \frac{1}{2}\tan^{-1}\left(\frac{0.8}{1.14}\right) \approx 17.5^\circ
5.96 The random variables X_1 and X_2 are statistically independent with U(0, 1) distributions. Prove that the random variables Y_1 and Y_2 generated by the following transformation are N(0, 1):
Y_1 = \sqrt{-2\ln(X_1)}\cos(2\pi X_2)
Y_2 = \sqrt{-2\ln(X_1)}\sin(2\pi X_2)
Solution Given:
f_{X_1,X_2}(x_1, x_2) = 1 for 0 < x_1 < 1, 0 < x_2 < 1, and 0 otherwise.
Y_1^2 + Y_2^2 = -2\ln X_1 \Rightarrow X_1 = e^{-(y_1^2 + y_2^2)/2}
f_{Y_1,Y_2}(y_1, y_2) = \frac{1}{|J|}f_{X_1,X_2}(x_1, x_2)
J = \begin{vmatrix}\partial y_1/\partial x_1 & \partial y_1/\partial x_2\\ \partial y_2/\partial x_1 & \partial y_2/\partial x_2\end{vmatrix} = \begin{vmatrix}-\dfrac{\cos(2\pi x_2)(-2\ln x_1)^{-1/2}}{x_1} & -2\pi\sqrt{-2\ln x_1}\sin 2\pi x_2\\ -\dfrac{\sin(2\pi x_2)(-2\ln x_1)^{-1/2}}{x_1} & 2\pi\sqrt{-2\ln x_1}\cos 2\pi x_2\end{vmatrix}
= -\frac{2\pi}{x_1} = \frac{-2\pi}{e^{-(y_1^2 + y_2^2)/2}}
f_{Y_1,Y_2}(y_1, y_2) = \frac{1}{2\pi}e^{-(y_1^2 + y_2^2)/2}f_{X_1,X_2}(x_1, x_2)
\Rightarrow f_{Y_1,Y_2}(y_1, y_2) = \frac{1}{2\pi}e^{-(y_1^2 + y_2^2)/2}
From the above equation, we find that Y_1 and Y_2 are jointly normal; indeed they are independent N(0, 1) random variables.
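This transformation is the basis of the well-known Box–Muller method for generating Gaussian samples. A minimal Python sketch (editorial illustration; the seed, sample size and use of NumPy are assumptions, not from the text) implements it and checks the moments:

    import numpy as np

    rng = np.random.default_rng(3)
    x1 = rng.uniform(size=500_000)      # X1 ~ U(0, 1)
    x2 = rng.uniform(size=500_000)      # X2 ~ U(0, 1)
    r = np.sqrt(-2 * np.log(x1))
    y1 = r * np.cos(2 * np.pi * x2)
    y2 = r * np.sin(2 * np.pi * x2)
    # both outputs should be approximately N(0, 1) and uncorrelated
    print(y1.mean(), y1.var(), y2.mean(), y2.var(),
          np.corrcoef(y1, y2)[0, 1])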
\sigma_{Y|X}^2 = E[Y^2|X = x] - \{E[Y|X = x]\}^2 (5.81)
Solved Problems
For the joint pdf f_{X,Y}(x, y) = \frac{1}{y}e^{-x/y}e^{-y}, x, y > 0:
f_Y(y) = \int_0^\infty\frac{1}{y}e^{-x/y}e^{-y}\,dx = e^{-y}\left[-e^{-x/y}\right]_0^\infty = e^{-y}
f(x|y) = \frac{\frac{1}{y}e^{-x/y}e^{-y}}{e^{-y}} = \frac{1}{y}e^{-x/y}
E[X^2|Y = y] = \int_0^\infty x^2\left(\frac{1}{y}e^{-x/y}\right)dx = \frac{1}{y}\int_0^\infty x^2e^{-x/y}\,dx
= \frac{1}{y}\left[-x^2ye^{-x/y} - 2xy^2e^{-x/y} - 2y^3e^{-x/y}\right]_0^\infty = \frac{1}{y}(2y^3) = 2y^2
Similarly, for X uniformly distributed on (0, y) given Y = y,
E[X^3|Y = y] = \int_0^y\frac{x^3}{y}\,dx = \frac{1}{y}\cdot\frac{x^4}{4}\Big|_0^y = \frac{y^3}{4}
LINEAR TRANSFORMATION OF GAUSSIAN RANDOM VARIABLES 5.10
Consider a set of N Gaussian random variables X_1, X_2, \ldots, X_N with joint density function
f_{X_1,X_2,\ldots,X_N}(x_1, x_2, \ldots, x_N) = \frac{1}{(2\pi)^{N/2}|C_X|^{1/2}}\exp\left\{-\frac{(x - \bar{X})^TC_X^{-1}(x - \bar{X})}{2}\right\} (5.85)
Let these random variables be linearly transformed to a new set of random variables Y_1, Y_2, \ldots, Y_N using the following relation:
Y_1 = a_{11}X_1 + a_{12}X_2 + \cdots + a_{1N}X_N
\vdots (5.86)
Y_N = a_{N1}X_1 + a_{N2}X_2 + \cdots + a_{NN}X_N
where a_{ij}, i = 1, \ldots, N, j = 1, \ldots, N are real numbers. Equation (5.86) can be represented in matrix form as
[Y] = [T][X], \qquad [T] = \begin{bmatrix}a_{11} & \cdots & a_{1N}\\ a_{21} & \cdots & a_{2N}\\ a_{N1} & \cdots & a_{NN}\end{bmatrix}
where [T] is assumed nonsingular, so that
[X] = [T]^{-1}[Y] (5.89)
Writing the elements of [T]^{-1} as a^{ij},
\begin{bmatrix}X_1\\ \vdots\\ X_N\end{bmatrix} = \begin{bmatrix}a^{11} & \cdots & a^{1N}\\ a^{21} & \cdots & a^{2N}\\ a^{N1} & \cdots & a^{NN}\end{bmatrix}\begin{bmatrix}Y_1\\ \vdots\\ Y_N\end{bmatrix} (5.90)
that is,
X_1 = a^{11}Y_1 + \cdots + a^{1N}Y_N
\vdots (5.91)
X_N = a^{N1}Y_1 + \cdots + a^{NN}Y_N
In general, we can write
\frac{\partial X_i}{\partial Y_j} = a^{ij} (5.92)
The Jacobian of the transformation is therefore
|J| = \left|[T]^{-1}\right| = \frac{1}{|[T]|} (5.94)
If [C_X] represents the covariance matrix of the random variables X_1, X_2, \ldots, X_N, its ij-th element is
C_{X_iX_j} = E[(X_i - \bar{X}_i)(X_j - \bar{X}_j)]
Note that
[C_Y] = [T][C_X][T]^t
Solved Problems
C_Y = [T][C_X][T]^t = \begin{bmatrix}5 & 2\\ -1 & 3\end{bmatrix}\begin{bmatrix}2 & -1\\ -1 & 2\end{bmatrix}\begin{bmatrix}5 & -1\\ 2 & 3\end{bmatrix} = \begin{bmatrix}8 & -1\\ -5 & 7\end{bmatrix}\begin{bmatrix}5 & -1\\ 2 & 3\end{bmatrix} = \begin{bmatrix}38 & -11\\ -11 & 26\end{bmatrix}
|[C_Y]| = 38(26) - (-11)^2 = 867
[C_Y]^{-1} = \frac{1}{867}\begin{bmatrix}26 & 11\\ 11 & 38\end{bmatrix}
f_{Y_1,Y_2}(y_1, y_2) = \frac{1}{2\pi\sqrt{867}}\exp\left\{-\frac{1}{2}[y_1\ y_2][C_Y]^{-1}\begin{bmatrix}y_1\\ y_2\end{bmatrix}\right\}
Solution
(a) E[X_1^2] = \sigma_{X_1}^2 + (\bar{X}_1)^2 = 4 + (1)^2 = 5
(b) E[X_2^2] = \sigma_{X_2}^2 + (\bar{X}_2)^2 = 9 + (2)^2 = 13
(c) \rho_{X_1X_2} = \frac{C_{X_1X_2}}{\sigma_{X_1}\sigma_{X_2}} = \frac{-2}{2(3)} = -\frac{1}{3}
(d) [C_Y] = [T][C_X][T]^t, with
T = \begin{bmatrix}2 & 1\\ -1 & 1\end{bmatrix};\quad T^t = \begin{bmatrix}2 & -1\\ 1 & 1\end{bmatrix}
(e) [C_X] = \begin{bmatrix}\sigma_{X_1}^2 & C_{X_1X_2}\\ C_{X_1X_2} & \sigma_{X_2}^2\end{bmatrix} = \begin{bmatrix}4 & -2\\ -2 & 9\end{bmatrix}
(f) C_Y = \begin{bmatrix}2 & 1\\ -1 & 1\end{bmatrix}\begin{bmatrix}4 & -2\\ -2 & 9\end{bmatrix}\begin{bmatrix}2 & -1\\ 1 & 1\end{bmatrix} = \begin{bmatrix}6 & 5\\ -6 & 11\end{bmatrix}\begin{bmatrix}2 & -1\\ 1 & 1\end{bmatrix} = \begin{bmatrix}17 & -1\\ -1 & 17\end{bmatrix}
\Rightarrow \sigma_{Y_1}^2 = 17;\ \sigma_{Y_2}^2 = 17;\ C_{Y_1Y_2} = -1
Capital letters represent the random variables in a sample, whereas particular values taken by the random variables are denoted by x_1, x_2, x_3, \ldots, x_n.
Random samples from a population can be obtained in two ways: (i) with replacement, and (ii) without replacement. In sampling with replacement, the items that are drawn are put back into the population, so the random variables X_1, X_2, X_3, \ldots, X_n have the same distribution function; if the successive drawings are independent, the random variables are i.i.d. For example, if there is one defective resistor among N resistors, the probability of drawing a defective resistor on the first draw is 1/N; since the resistor is replaced after every draw, the probability remains the same for subsequent draws.
In sampling without replacement, the random variables X_1, X_2, \ldots, X_n are no longer i.i.d. If the resistor drawn first is not defective and is not replaced, the probability of drawing a defective resistor on the second draw becomes 1/(N - 1). For small values of N the difference is significant, so the distribution of X_1, \ldots, X_n is not i.i.d. However, if the population size is large, we can approximate the distribution as i.i.d.
Since each X_i in the sample is a random variable, the sample mean is also a random variable, characterized by its own mean and variance. The expected value of the sample mean is
E[\bar{X}] = E\left[\frac{1}{n}\sum_{i=1}^{n}X_i\right] = \frac{1}{n}\sum_{i=1}^{n}E[X_i] = \frac{1}{n}[n\mu] = \mu (5.100, 5.101)
That is, the sample mean equals the true mean on the average.
Var(\bar{X}) = Var\left[\frac{1}{n}\sum_{i=1}^{n}X_i\right] = \frac{1}{n^2}\sum_{i=1}^{n}Var(X_i) = \frac{1}{n^2}(n\sigma^2) = \frac{\sigma^2}{n} (5.102, 5.103)
That is, the variance of the sample mean is reduced by increasing the sample size n, and it goes to zero as n \to \infty.
Sample variance is also a random variable. The expectation of the sample variance is
E[S^2] = E\left[\frac{1}{n - 1}\sum_{i=1}^{n}(X_i - \bar{X})^2\right] = \frac{1}{n - 1}\sum_{i=1}^{n}E[(X_i - \bar{X})^2]
= \frac{1}{n - 1}\sum_{i=1}^{n}E[\{(X_i - \mu) - (\bar{X} - \mu)\}^2] (5.106)
= \frac{1}{n - 1}\left[\sum_{i=1}^{n}\left\{E[(X_i - \mu)^2] + E[(\bar{X} - \mu)^2]\right\} - 2E\left[(\bar{X} - \mu)\sum_{i=1}^{n}(X_i - \mu)\right]\right]
= \frac{1}{n - 1}\left[\sum_{i=1}^{n}\left(\sigma^2 + \frac{\sigma^2}{n}\right) - 2E[(\bar{X} - \mu)(n\bar{X} - n\mu)]\right]
= \frac{1}{n - 1}\left[\sum_{i=1}^{n}\left(\sigma^2 + \frac{\sigma^2}{n}\right) - 2nE[(\bar{X} - \mu)^2]\right]
= \frac{1}{n - 1}\left[\sum_{i=1}^{n}\left(\sigma^2 + \frac{\sigma^2}{n}\right) - 2\sigma^2\right] \quad\left(\because E[(\bar{X} - \mu)^2] = \frac{\sigma^2}{n}\right)
= \frac{1}{n - 1}[n\sigma^2 + \sigma^2 - 2\sigma^2] = \frac{1}{n - 1}(n - 1)\sigma^2 = \sigma^2
That is, the sample variance equals the true variance of the population on the average.
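These three facts — E[\bar{X}] = \mu, Var(\bar{X}) = \sigma^2/n and E[S^2] = \sigma^2 — can be confirmed by simulation. The sketch below (an editorial illustration; the distribution, seed and use of NumPy are assumptions, not from the text) averages over many samples of size n:

    import numpy as np

    rng = np.random.default_rng(4)
    mu, sigma, n, trials = 2.0, 3.0, 10, 200_000
    samples = rng.normal(mu, sigma, (trials, n))
    xbar = samples.mean(axis=1)
    s2 = samples.var(axis=1, ddof=1)        # divide by n - 1, as in S^2
    print(xbar.mean(), mu)                  # E[Xbar] = mu
    print(xbar.var(), sigma**2 / n)         # Var(Xbar) = sigma^2 / n
    print(s2.mean(), sigma**2)              # E[S^2] = sigma^2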
Solved Problems
5.101 Let X be a random variable with pdf f_X(x) and CDF F_X(x). Let (X_1, X_2, \ldots, X_n) be a random sample of X. Define
W = max(X_1, X_2, \ldots, X_n) and Z = min(X_1, X_2, \ldots, X_n).
Find the pdfs of W and Z.
Solution
Given W = max(X_1, X_2, \ldots, X_n) and F_X(x) = P[X \le x].
The CDF of W is F_W(w) = P[W \le w]. The event (W \le w) is equivalent to the event (X_i \le w for all i). Since the X_i are independent,
F_W(w) = P[X_1 \le w, X_2 \le w, \ldots, X_n \le w] = P[X_1 \le w]P[X_2 \le w]\cdots P[X_n \le w] = [F_X(w)]^n (5.109)
f_W(w) = n[F_X(w)]^{n - 1}\frac{d}{dw}F_X(w) = n[F_X(w)]^{n - 1}f_X(w)
For Z = min(X_1, \ldots, X_n), the CDF is
F_Z(z) = P(Z \le z) = 1 - P(Z > z) = 1 - P(X_1 > z, X_2 > z, \ldots, X_n > z) = 1 - [1 - F_X(z)]^n
so that f_Z(z) = n[1 - F_X(z)]^{n - 1}f_X(z).
5.102 If X_1, \ldots, X_n are independent random variables, each having exponential distribution with parameter \alpha, prove that Z = min(X_1, \ldots, X_n) has exponential distribution with parameter n\alpha.
Practice Problem
5.32 If X_1, X_2, X_3, X_4 are i.i.d. exponential random variables with parameter \lambda, compute P\{min(X_1, \ldots, X_4) < a\} and P\{max(X_1, \ldots, X_4) \le a\}.
(Ans. 1 - e^{-4\lambda a} and [1 - e^{-\lambda a}]^4)
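The result of Problem 5.102 is easy to verify empirically. The sketch below (an editorial illustration; the parameter values, seed and use of NumPy are assumptions, not from the text) checks that min(X_1, \ldots, X_n) behaves like an exponential with parameter n\alpha:

    import numpy as np

    rng = np.random.default_rng(5)
    alpha, n = 2.0, 5
    x = rng.exponential(1 / alpha, (1_000_000, n))  # scale = 1/alpha
    z = x.min(axis=1)
    # Z = min(X1, ..., Xn) should be exponential with parameter n*alpha,
    # so P[Z > t] = exp(-n*alpha*t)
    t = 0.1
    print(np.mean(z > t), np.exp(-n * alpha * t))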
Solved Problems
Solution
(a) We have Z = cos X + j sin Y. The mean value of Z is
E[Z] = E[\cos X + j\sin Y] = E[\cos X] + jE[\sin Y]
Given that X is uniformly distributed from -\pi to \pi,
f_X(x) = \frac{1}{2\pi} \text{ for } -\pi \le x \le \pi, \text{ and } 0 \text{ otherwise}
E[\cos X] = \int_{-\pi}^{\pi}\frac{1}{2\pi}\cos x\,dx = \frac{1}{2\pi}\sin x\Big|_{-\pi}^{\pi} = 0
Similarly,
E[\sin Y] = \int_{-\pi}^{\pi}\frac{1}{2\pi}\sin y\,dy = \frac{-1}{2\pi}\cos y\Big|_{-\pi}^{\pi} = 0
\Rightarrow E[Z] = 0
(b) Var(Z) = E[|Z|^2] - |E[Z]|^2 = E[|Z|^2] = E[\cos^2X + \sin^2Y] = E[\cos^2X] + E[\sin^2Y]
E[\cos^2X] = E\left[\frac{1 + \cos 2X}{2}\right] = \frac{1}{2} + E\left[\frac{\cos 2X}{2}\right], \quad E[\cos 2X] = \frac{1}{2\pi}\int_{-\pi}^{\pi}\cos 2x\,dx = 0
\Rightarrow E[\cos^2X] = \frac{1}{2}
E[\sin^2Y] = \frac{1}{2}E[1 - \cos 2Y] = \frac{1}{2} - \frac{1}{2}E[\cos 2Y] = \frac{1}{2}
Var[Z] = \frac{1}{2} + \frac{1}{2} = 1
REGRESSION 5.13
The term regression is used to analyze the statistical relation between two or more variables. Figure 5.30 shows the relation between two random variables X and Y (scatter plots of Figs. 5.30 and 5.31 omitted).
In Fig. 5.30 the sample points are clustered around a straight line, so a straight line gives the best estimate of the value of one variable for any specific value of the other. When we fit the data with a straight line, it is known as a line of regression. In Fig. 5.31, the sample points are clustered around a curve; a curve then gives the best estimate, which is called a curve of regression.
The parameters \hat{a} and \hat{b} are estimated using the method of least squares, where the sum of squared errors \sum_{i=1}^{n}e_i^2 is minimized. To find \hat{a} and \hat{b}, differentiate the error sum with respect to \hat{a} and \hat{b}:
\frac{\partial E}{\partial\hat{a}} = 2\sum_{i=1}^{N}(y_i - \hat{a} - \hat{b}x_i)(-1) = 0
\Rightarrow \sum_{i=1}^{N}y_i = \hat{a}N + \hat{b}\sum_{i=1}^{N}x_i (5.125)
\frac{\partial E}{\partial\hat{b}} = 2\sum_{i=1}^{N}(y_i - \hat{a} - \hat{b}x_i)(-x_i) = 0
\Rightarrow \sum_{i=1}^{N}x_iy_i = \hat{a}\sum_{i=1}^{N}x_i + \hat{b}\sum_{i=1}^{N}x_i^2 (5.127)
From Eq. (5.125),
\hat{a} = \frac{1}{N}\sum_{i=1}^{N}y_i - \hat{b}\frac{1}{N}\sum_{i=1}^{N}x_i = m_Y - \hat{b}m_X (5.128)
Substituting in Eq. (5.127),
\sum_{i=1}^{N}x_iy_i = (m_Y - \hat{b}m_X)\sum_{i=1}^{N}x_i + \hat{b}\sum_{i=1}^{N}x_i^2 = m_Y\sum_{i=1}^{N}x_i - \hat{b}m_X\sum_{i=1}^{N}x_i + \hat{b}\sum_{i=1}^{N}x_i^2
\sum_{i=1}^{N}x_iy_i - Nm_Xm_Y = \hat{b}\left[\sum_{i=1}^{N}x_i^2 - Nm_X^2\right]
\hat{b} = \frac{\sum_{i=1}^{N}x_iy_i - Nm_Xm_Y}{\sum_{i=1}^{N}x_i^2 - Nm_X^2} (5.129)
= \frac{\sum_{i=1}^{N}(x_i - m_X)(y_i - m_Y)}{\sum_{i=1}^{N}(x_i - m_X)^2} = \frac{Cov(X, Y)}{\sigma_X^2} (5.130)
In terms of raw sums,
\hat{b} = \frac{\Sigma XY - \frac{\Sigma X\,\Sigma Y}{N}}{\Sigma X^2 - \frac{(\Sigma X)^2}{N}} = \frac{N\Sigma XY - \Sigma X\,\Sigma Y}{N\Sigma X^2 - (\Sigma X)^2} (5.131)
We have \rho = \frac{Cov(x, y)}{\sigma_X\sigma_Y}, so
\hat{b} = \rho\frac{\sigma_Y}{\sigma_X}
The fitted line \hat{y}_i = (\bar{y} - \hat{b}\bar{x}) + \hat{b}x_i can be written as
(\hat{y}_i - \bar{y}) = \hat{b}(x_i - \bar{x}) = \rho\frac{\sigma_Y}{\sigma_X}(x_i - \bar{x}) (5.132)
Since we are predicting the value of y for a given x, Eq. (5.132) is known as the regression line of y on x.
Similarly, let \hat{x}_i = \hat{a}_1 + \hat{b}_1y_i, where \hat{x}_i is the predicted value of x_i. The parameters \hat{a}_1 and \hat{b}_1 are estimated by minimizing the sum of squared errors, with error \epsilon_i = x_i - \hat{x}_i:
\epsilon = \sum_{i=1}^{N}(x_i - \hat{x}_i)^2 = \sum_{i=1}^{N}(x_i - \hat{a}_1 - \hat{b}_1y_i)^2 (5.135)
\frac{\partial\epsilon}{\partial\hat{a}_1} = 2\sum_{i=1}^{N}(x_i - \hat{a}_1 - \hat{b}_1y_i)(-1) = 0 \Rightarrow \sum_{i=1}^{N}x_i = \hat{a}_1N + \hat{b}_1\sum_{i=1}^{N}y_i (5.136)
\frac{\partial\epsilon}{\partial\hat{b}_1} = 2\sum_{i=1}^{N}(x_i - \hat{a}_1 - \hat{b}_1y_i)(-y_i) = 0 \Rightarrow \sum_{i=1}^{N}x_iy_i - \hat{a}_1N\bar{y} - \hat{b}_1\sum_{i=1}^{N}y_i^2 = 0 (5.138)
Substituting \hat{a}_1 = \bar{x} - \hat{b}_1\bar{y},
\sum_{i=1}^{N}x_iy_i - N(\bar{x} - \hat{b}_1\bar{y})\bar{y} - \hat{b}_1\sum_{i=1}^{N}y_i^2 = 0
\hat{b}_1 = \frac{\sum_{i=1}^{N}x_iy_i - N\bar{x}\bar{y}}{\sum_{i=1}^{N}y_i^2 - N\bar{y}^2} = \frac{\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{N}(y_i - \bar{y})^2} = \frac{Cov(X, Y)}{\sigma_Y^2} = \frac{\rho\sigma_X}{\sigma_Y} (5.139, 5.140)
so that the fitted line is
(\hat{x}_i - \bar{x}) = \frac{\rho\sigma_X}{\sigma_Y}(y_i - \bar{y}) (5.141)
In terms of raw sums,
\hat{b}_1 = \frac{N\Sigma XY - \Sigma X\,\Sigma Y}{N\Sigma Y^2 - (\Sigma Y)^2} (5.142)
The regression coefficients are written as
b_{yx} = \rho\frac{\sigma_Y}{\sigma_X} (5.144)
and the regression line of X on Y is
(x - \bar{x}) = \frac{\rho\sigma_X}{\sigma_Y}(y - \bar{y}) = b_{xy}(y - \bar{y}),\qquad b_{xy} = \frac{\rho\sigma_X}{\sigma_Y} (5.145, 5.146)
where b_{yx} is the coefficient of x and b_{xy} is the coefficient of y, and
b_{yx}b_{xy} = \rho^2
Solved Problems
5.104 The following distribution gives the likely price of a commodity in two cities x and y.

	x	y
Mean	65	67
SD	2.5	3.5

The coefficient of correlation between x and y is 0.8. Find
(i) the regression line of y on x, and
(ii) the likely price of y when x = 70.
Solution
Given: \bar{x} = 65;\ \bar{y} = 67;\ \sigma_x = 2.5;\ \sigma_y = 3.5; also r = 0.8.
The regression line of y on x is given by
(y - \bar{y}) = r\frac{\sigma_y}{\sigma_x}(x - \bar{x}) = 0.8\left(\frac{3.5}{2.5}\right)(x - \bar{x})
(y - 67) = 1.12(x - 65) \Rightarrow y = 1.12x - 5.8
When x = 70, y = 1.12(70) - 5.8 = 72.6.
5.105 Calculate the correlation coefficient and the lines of regression from the following data.
x 62 64 65 69 70 71 72 74
y 126 125 139 145 165 152 180 208
Solution

x	y	xy	x²	y²
62	126	7812	3844	15876
64	125	8000	4096	15625
65	139	9035	4225	19321
69	145	10005	4761	21025
70	165	11550	4900	27225
71	152	10792	5041	23104
72	180	12960	5184	32400
74	208	15392	5476	43264
Σx = 547	Σy = 1240	Σxy = 85546	Σx² = 37527	Σy² = 197840

\bar{x} = 68.375;\quad\bar{y} = 155;\quad\frac{1}{N}\Sigma xy = 10693.25
\sigma_x^2 = \frac{1}{N}\Sigma x^2 - \bar{x}^2 = \frac{37527}{8} - (68.375)^2 = 15.73 \Rightarrow \sigma_x = 3.96
\sigma_y^2 = \frac{1}{N}\Sigma y^2 - \bar{y}^2 = \frac{197840}{8} - (155)^2 = 705 \Rightarrow \sigma_y = 26.55
r_{xy} = \frac{\frac{1}{N}\Sigma xy - \bar{x}\bar{y}}{\sigma_x\sigma_y} = \frac{10693.25 - (68.375)(155)}{(3.96)(26.55)} = 0.905
The regression line of x on y is
(x - \bar{x}) = r\frac{\sigma_x}{\sigma_y}(y - \bar{y}) \Rightarrow (x - 68.375) = 0.905\left(\frac{3.96}{26.55}\right)(y - 155) = 0.135(y - 155)
x = 0.135y + 47.45
The regression line of y on x is
(y - \bar{y}) = r\frac{\sigma_y}{\sigma_x}(x - \bar{x}) \Rightarrow (y - 155) = 0.905\left(\frac{26.55}{3.96}\right)(x - 68.375) = 6.076(x - 68.375)
y = 6.076x - 260.44
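The whole calculation can be reproduced in a few lines of Python (an editorial sketch; the use of NumPy is an assumption, not from the text). Note that population standard deviations (divide by N) are used, matching the text's convention:

    import numpy as np

    x = np.array([62, 64, 65, 69, 70, 71, 72, 74], float)
    y = np.array([126, 125, 139, 145, 165, 152, 180, 208], float)
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()          # population SDs, as in the text
    r = ((x * y).mean() - mx * my) / (sx * sy)
    b_yx = r * sy / sx                 # slope of regression line of y on x
    b_xy = r * sx / sy                 # slope of regression line of x on y
    print(r, b_yx, my - b_yx * mx)     # r ~ 0.905, y ~ 6.08x - 260.4
    print(b_xy, mx - b_xy * my)        # x ~ 0.135y + 47.4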
5.106 The following table gives the data on rainfall and discharge in a certain river. Obtain the line of
regression of y and x.
Rainfall (inches) X 1.53 1.78 2.60 2.95 3.42
Discharge 100 c.c. Y 33.5 36.3 40.0 45.8 53.5
Solution

Rainfall X (inches)	Discharge Y (100 cc)	xy	x²	y²
1.53	33.5	51.255	2.3409	1122.25
1.78	36.3	64.614	3.1684	1317.69
2.60	40.0	104.00	6.76	1600
2.95	45.8	135.11	8.7025	2097.64
3.42	53.5	182.97	11.6964	2862.25
Σx = 12.28	Σy = 209.1	Σxy = 537.949	Σx² = 32.668	Σy² = 8999.83

\bar{x} = 2.456;\ \bar{y} = 41.8;\ \sigma_x = 0.7085;\ \sigma_y = 7.261
r = \frac{\frac{1}{N}\Sigma xy - \bar{x}\bar{y}}{\sigma_x\sigma_y} = \frac{\frac{537.949}{5} - (2.456)(41.8)}{(0.7085)(7.261)} = \frac{4.929}{5.144} = 0.958
The regression line of y on x is
(y - 41.8) = \frac{0.958(7.261)}{0.7085}(x - 2.456) = 9.81(x - 2.456)
y = 9.81x + 17.706
5.107 Can Y = 5 + 2.8X and X = 3 - 0.5Y be the estimated regression equations of Y on X and X on Y respectively? Explain your answer.
Solution Given: X = 3 - 0.5Y and Y = 5 + 2.8X, from which b_{xy} = -0.5 and b_{yx} = 2.8.
We know
r^2 = b_{xy}b_{yx} = (-0.5)(2.8) = -1.4
r = \sqrt{-1.4}, which is imaginary.
Therefore, the lines cannot be the estimated regression equations.
(Contd.)

x	y	X = x - 160	Y = y - 160	X²	Y²	XY
157	159	-3	-1	9	1	3
160	160	0	0	0	0	0
161	162	1	2	1	4	2
164	161	4	1	16	1	4
166	164	6	4	36	16	24
Σx = 1265	Σy = 1274	ΣX = -15	ΣY = -6	ΣX² = 251	ΣY² = 78	ΣXY = 135

\bar{X} = -\frac{15}{8} = -1.875;\quad\bar{Y} = -\frac{6}{8} = -0.75
\bar{x} = \bar{X} + 160 = 158.125;\quad\bar{y} = \bar{Y} + 160 = 159.25
b_{yx} = \frac{N\Sigma XY - \Sigma X\,\Sigma Y}{N\Sigma X^2 - (\Sigma X)^2} = \frac{8(135) - (-15)(-6)}{8(251) - (-15)^2} = \frac{990}{1783} = 0.555
b_{xy} = \frac{N\Sigma XY - \Sigma X\,\Sigma Y}{N\Sigma Y^2 - (\Sigma Y)^2} = \frac{8(135) - (-15)(-6)}{8(78) - (-6)^2} = \frac{990}{588} = 1.683
The regression line of y on x is
(y - \bar{y}) = b_{yx}(x - \bar{x}) \Rightarrow y - 159.25 = 0.555(x - 158.125) \Rightarrow y = 0.555x + 71.49
When x = 154, y = 156.96.
The regression line of x on y is
(x - \bar{x}) = b_{xy}(y - \bar{y}) \Rightarrow (x - 158.125) = 1.683(y - 159.25) \Rightarrow x = 1.683y - 109.76
5.109 Show that the angle between the two lines of regression is \tan^{-1}\left(\dfrac{(1 - r^2)}{r}\cdot\dfrac{\sigma_X\sigma_Y}{\sigma_X^2 + \sigma_Y^2}\right).
Solution The slopes of the regression lines of y on x and of x on y are r\sigma_y/\sigma_x and \sigma_y/(r\sigma_x) respectively; substituting these into the formula for the angle between two lines gives
\tan\theta = \frac{(1 - r^2)\sigma_x\sigma_y}{r(\sigma_x^2 + \sigma_y^2)} (5.150)
5.110 Let x_1, x_2 and x_3 be uncorrelated random variables, each having the same standard deviation. Find the correlation coefficient between u = (x_1 + x_2) and v = (x_2 + x_3).
Solution Since the variables are uncorrelated,
Cov(u, v) = Cov(x_1 + x_2, x_2 + x_3) = Var(x_2) = \sigma_x^2
\sigma_u^2 = \sigma_{x_1}^2 + \sigma_{x_2}^2 = 2\sigma_x^2; \text{ similarly, } \sigma_v^2 = 2\sigma_x^2
\rho_{uv} = \frac{\sigma_x^2}{\sqrt{2\sigma_x^2}\sqrt{2\sigma_x^2}} = \frac{1}{2}
5.111 For ten observations on price X and supply Y, the following data were obtained:
\Sigma x = 130;\ \Sigma y = 220;\ \Sigma x^2 = 2288;\ \Sigma y^2 = 5506;\ \Sigma xy = 3467
Obtain the line of regression of Y on X and estimate the supply when the price is 16 units.
Solution
\bar{x} = \frac{\Sigma x}{n} = \frac{130}{10} = 13;\quad\bar{y} = \frac{\Sigma y}{n} = \frac{220}{10} = 22
\sigma_x^2 = \frac{1}{N}\Sigma x^2 - (\bar{x})^2 = \frac{2288}{10} - (13)^2 = 59.8 \Rightarrow \sigma_x = 7.73
\sigma_y^2 = \frac{1}{N}\Sigma y^2 - (\bar{y})^2 = \frac{5506}{10} - (22)^2 = 66.6 \Rightarrow \sigma_y = 8.16
r = \frac{\frac{1}{N}\Sigma xy - \bar{x}\bar{y}}{\sigma_X\sigma_Y} = \frac{\frac{3467}{10} - (13)(22)}{(7.73)(8.16)} = 0.964
The line of regression of Y on X is
(y - \bar{y}) = r\frac{\sigma_y}{\sigma_x}(x - \bar{x}) \Rightarrow (y - 22) = 0.964\left(\frac{8.16}{7.73}\right)(x - 13) = 1.018(x - 13)
y = 1.018x + 8.77
When x = 16, y = 1.018(16) + 8.77 \approx 25.06 units.
5.112 If the lines of regression of x on y and of y on x are a_1x + b_1y + c_1 = 0 and a_2x + b_2y + c_2 = 0 respectively, prove that a_2b_1 \le a_1b_2.
Solution The regression of x on y is
a_1x + b_1y + c_1 = 0 \Rightarrow x = -\frac{b_1}{a_1}y - \frac{c_1}{a_1} (5.151)
The regression of y on x is
a_2x + b_2y + c_2 = 0 \Rightarrow y = -\frac{a_2}{b_2}x - \frac{c_2}{b_2} (5.152)
From Eq. (5.151), b_{xy} = -\frac{b_1}{a_1}, and from Eq. (5.152), b_{yx} = -\frac{a_2}{b_2}.
We have
r^2 = b_{xy}b_{yx} = \left(-\frac{b_1}{a_1}\right)\left(-\frac{a_2}{b_2}\right) = \frac{a_2b_1}{a_1b_2}
Since r^2 \le 1,
\frac{a_2b_1}{a_1b_2} \le 1 \Rightarrow a_2b_1 \le a_1b_2
5.113 If y = 2x - 3 and y = 5x + 7 are the two regression lines, find the mean values of x and y and the correlation coefficient between x and y. Find an estimate of x when y = 1.
Solution Let \bar{x} and \bar{y} be the mean values of x and y. Since the mean values satisfy the regression lines, we can write
\bar{y} = 2\bar{x} - 3 \Rightarrow 2\bar{x} - \bar{y} = 3
\bar{y} = 5\bar{x} + 7 \Rightarrow 5\bar{x} - \bar{y} = -7
Solving, \bar{x} = -\frac{10}{3} and \bar{y} = 5\left(-\frac{10}{3}\right) + 7 = -\frac{29}{3}.
First, let the line of regression of x on y be y = 2x - 3, i.e. x = \frac{1}{2}y + 1.5, from which b_{xy} = \frac{1}{2}. Then the line of regression of y on x is y = 5x + 7, so b_{yx} = 5, and
r^2 = b_{xy}b_{yx} = \frac{5}{2} \Rightarrow r = \sqrt{\frac{5}{2}} > 1
which is impossible, so we must change the assumption.
Now let the line of regression of y on x be y = 2x - 3, so b_{yx} = 2, and let the line of regression of x on y be y = 5x + 7, i.e.
x = \frac{y}{5} - \frac{7}{5} \Rightarrow b_{xy} = \frac{1}{5}
r^2 = b_{xy}b_{yx} = \frac{2}{5} \Rightarrow r = \sqrt{\frac{2}{5}} = 0.6324
To estimate x when y = 1, use the regression line of x on y: x = \frac{1}{5}(1) - \frac{7}{5} = -1.2.
5.114 The joint pdf of two random variables X and Y is given by f_{X,Y}(x, y) = 8xy for 0 \le x \le y \le 1 and zero otherwise. Obtain the regression curve of X on Y.
Solution Given: f_{X,Y}(x, y) = 8xy, 0 \le x \le y \le 1. In the shaded triangular region (Fig. 5.32), x varies from 0 to y and y varies from x to 1.
f_X(x) = \int_x^1 8xy\,dy = 4x(1 - x^2),\quad 0 \le x \le 1
f_Y(y) = \int_0^y 8xy\,dx = 8y\cdot\frac{x^2}{2}\Big|_0^y = 4y^3,\quad 0 \le y \le 1
E[X] = \int_0^1 x\cdot 4x(1 - x^2)\,dx = 4\left[\frac{x^3}{3} - \frac{x^5}{5}\right]_0^1 = \frac{8}{15}
E[Y] = \int_0^1 y\cdot 4y^3\,dy = \frac{4}{5}y^5\Big|_0^1 = \frac{4}{5}
E[XY] = \int_0^1\int_0^y 8x^2y^2\,dx\,dy = \frac{8}{3}\int_0^1 y^5\,dy = \frac{4}{9}
E[Y^2] = \int_0^1 y^2\cdot 4y^3\,dy = \frac{2}{3}
\sigma_Y^2 = E[Y^2] - \{E[Y]\}^2 = \frac{2}{3} - \frac{16}{25} = \frac{2}{75}
Cov(X, Y) = E[XY] - E[X]E[Y] = \frac{4}{9} - \frac{8}{15}\left(\frac{4}{5}\right) = \frac{4}{225}
b_{xy} = \frac{Cov(X, Y)}{\sigma_Y^2} = \frac{4/225}{2/75} = \frac{2}{3}
Practice Problem
5.33 The joint pdf of two random variables X and Y is given by f_{X,Y}(x, y) = 2 for 0 < x < y < 1 and zero otherwise. Find the regression curve of X on Y. (Ans. 2x - y = 0)
Solved Problems
5.115 The regression lines of Y on X and of X on Y are respectively x + 3y = 0 and 3x + 2y = 0. Find the regression line of V on U, where U = X + Y and V = X - Y.
Solution Both regression lines pass through the origin, so \bar{x} = \bar{y} = 0. From the line of regression of y on x, y = -x/3, so
b_{yx} = \frac{Cov(X, Y)}{\sigma_X^2} = -\frac{1}{3} \Rightarrow Cov(X, Y) = -\frac{\sigma_X^2}{3} (5.155)
From the line of regression of x on y, x = -2y/3, so
b_{xy} = \frac{Cov(X, Y)}{\sigma_Y^2} = -\frac{2}{3} \Rightarrow Cov(X, Y) = -\frac{2\sigma_Y^2}{3} (5.156)
From Eq. (5.155) and Eq. (5.156), \sigma_X^2 = 2\sigma_Y^2. Also, E[XY] = Cov(X, Y) = -\frac{2\sigma_Y^2}{3} = -\frac{\sigma_X^2}{3}.
Given U = X + Y and V = X - Y,
E[U] = E[X] + E[Y] = 0;\quad E[V] = E[X] - E[Y] = 0
Cov(U, V) = E[UV] - E[U]E[V] = E[(X + Y)(X - Y)] = E[X^2] - E[Y^2] = \sigma_X^2 - \sigma_Y^2 = \sigma_Y^2 \quad (\text{since } E[X] = E[Y] = 0)
\sigma_U^2 = E[U^2] = \sigma_X^2 + \sigma_Y^2 + 2E[XY] = 3\sigma_Y^2 - \frac{4\sigma_Y^2}{3} = \frac{5\sigma_Y^2}{3}
The regression line of V on U is
(v - \bar{v}) = \frac{Cov(U, V)}{\sigma_U^2}(u - \bar{u}) \Rightarrow v = \frac{\sigma_X^2 - \sigma_Y^2}{\frac{5}{3}\sigma_Y^2}u = \frac{3}{5}u
\Rightarrow 3u - 5v = 0
5.116 The two regression lines are 4x - 5y + 33 = 0 and 20x - 9y = 107, and the variance of x is 25. Find the means of x and y, the correlation coefficient, and the value of \sigma_y.
Solution Let the mean values of x and y be \bar{x} and \bar{y}. The mean values satisfy the regression line equations, therefore
4\bar{x} - 5\bar{y} = -33 \quad\text{and}\quad 20\bar{x} - 9\bar{y} = 107
Solving, \bar{x} = 13 and \bar{y} = 17.
Let us assume that the line of regression of x on y is 4x = 5y - 33, i.e.
x = \frac{5}{4}y - \frac{33}{4} (5.157)
and the line of regression of y on x is 9y = 20x - 107, i.e.
y = \frac{20}{9}x - \frac{107}{9} (5.158)
From Eq. (5.157) and Eq. (5.158), b_{xy} = \frac{5}{4} and b_{yx} = \frac{20}{9}, so
r^2 = b_{xy}b_{yx} = \frac{100}{36} \Rightarrow r = \frac{10}{6} = 1.66 > 1
Therefore, our assumption is wrong. Now assume the line of regression of y on x is
y = \frac{4}{5}x + \frac{33}{5} \Rightarrow b_{yx} = \frac{4}{5}
and the line of regression of x on y is
x = \frac{9}{20}y + \frac{107}{20} \Rightarrow b_{xy} = \frac{9}{20}
r^2 = b_{yx}b_{xy} = \frac{4}{5}\left(\frac{9}{20}\right) = \frac{36}{100} \Rightarrow r = \frac{6}{10} = 0.6
Finally, from b_{yx} = r\sigma_y/\sigma_x with \sigma_x = 5,
\frac{4}{5} = 0.6\,\frac{\sigma_y}{5} \Rightarrow \sigma_y = \frac{20}{3}
Practice Problems
5.34 The two regression lines are (i) y on x: 7x - 16y + 9 = 0, and (ii) x on y: 5y - 4x - 3 = 0. Calculate \bar{x}, \bar{y} and the correlation coefficient.
(Ans. \bar{x} = -3/29;\ \bar{y} = 15/29;\ r = 0.734)
5.35 The two lines of regression are
8x - 10y + 66 = 0
40x - 18y - 214 = 0
The variance of x is 9. Find \bar{x}, \bar{y} and r. (Ans. 13, 17, 0.6)
Solved Problems
Solution Given:
f_{X,Y}(x, y) = \frac{1}{3}(x + y),\quad 0 \le x \le 1,\ 0 \le y \le 2
f_X(x) = \int f_{X,Y}(x, y)\,dy = \frac{1}{3}\int_0^2(x + y)\,dy = \frac{1}{3}[2x + 2] = \frac{2}{3}(1 + x);\quad 0 \le x \le 1
f_Y(y) = \int f_{X,Y}(x, y)\,dx = \frac{1}{3}\int_0^1(x + y)\,dx = \frac{1}{3}\left[\frac{1}{2} + y\right] = \frac{1 + 2y}{6};\quad 0 < y < 2
5.146 Probability Theory and Random Processes
• 1 È 1 1˘
2 2 x2 x3 ˙= 5
x = E[ X ] = Ú x f X ( x ) dx = Ú (1 + x ) x dx = Í +
-•
3 3Í2 3 ˙ 9
0 Î 0 0˚
• È 1 1˘
2 ÍÊ x 3 ˆ
1
2 x4 ˙
E[X2] = Ú x 2 f X ( x ) dx = Ú3 + = +
2
(1 x ) x dx Á ˜
-•
3 ÍË 3 ¯ 4 ˙
0 Î 0 0˚
2 È1 1 ˘ 7
= + =
3 ÍÎ 3 4 ˙˚ 18
2
7 Ê 5ˆ 13
sX2 = E[X2] – {E[X]}2 = - =
18 ÁË 9 ˜¯ 162
• 2 È 2 2˘
y(1 + 2 y) 1 y2 2 3 ˙
y = E{Y ] =
Ú y fY ( y) dy = Ú dy = Í + y
-•
6 6Í2 3 ˙
0 Î 0 0
˚
1 È 16 ˘ 11
= Í2 + 3 ˙ = 9
6 Î ˚
• 2 È 2 2˘
y 2 (1 + 2 y) 1 y3 1 4 ˙
E[Y2] = Ú y 2 fY ( y) dy = Ú dy = Í _ y
-•
6 6Í3 2 ˙
0 Î 0 0
˚
1 È8 ˘ 16
= + 8˙ =
6 ÍÎ 3 ˚ 9
2
16 Ê 11ˆ 23
sY2 = E[Y2] – {E[Y]}2 = - =
9 ÁË 9 ˜¯ 81
• •
E[XY] = Ú Ú xy f X ,Y ( x, y) dx dy
-• -•
2 1
1
=
3 Ú Ú xy( x + y) dx dy
0 0
È 2 2˘
1 Ê y y2 ˆ
2
1 1 y2 1 y3
= Ú Á + ˜ dy = Í + ˙
3 0Ë 3 2¯ 3 Í3 2 2 3 ˙
Î 0 0˚
1 È2 4˘ 2
= + =
3 ÍÎ 3 3 ˙˚ 3
We know
Cov ( X , Y )
r =
s X sY
Operations on Multiple Random Variables 5.147
11 Cov ( X , Y ) Ê 5ˆ
y- = ÁË x - 9 ˜¯
9 s X2
- 1/18 Ê 5ˆ
=
13/162 ÁË x - 9 ˜¯
11 -2 Ê 5ˆ
y- = Áx - ˜
9 13 Ë 9¯
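As a numerical check (a sketch, not from the text), the following Python code draws from fX,Y(x, y) = (x + y)/3 on [0, 1] × [0, 2] by rejection sampling and estimates the means and the regression slope Cov(X, Y)/σX², which should come out near 5/9, 11/9 and −2/13 ≈ −0.154:

import numpy as np

rng = np.random.default_rng(1)
n = 2_000_000
x = rng.uniform(0, 1, n)
y = rng.uniform(0, 2, n)
# Accept (x, y) with probability f(x, y)/max f = (x + y)/3
keep = rng.random(n) < (x + y) / 3.0
x, y = x[keep], y[keep]
slope = np.cov(x, y)[0, 1] / np.var(x)
print(x.mean(), y.mean(), slope)   # ~0.556, ~1.222, ~-0.154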
REVIEW QUESTIONS
15. Define the expectation of a function of random variables.
16. Prove that the mean value of a weighted sum of random variables is equal to the weighted sum of
mean values.
17. Define joint moments about the origin.
18. Define correlation.
19. Explain joint central moment in detail.
20. Define covariance and explain the properties of covariance.
21. Explain how you get the pdf of a single function of several random variables.
22. Explain how you obtain the pdf of several functions of several random variables.
23. Define correlation coefficient.
24. Give expression for pdf of two jointly Gaussian random variables.
25. What are the properties of Gaussian random variables?
EXERCISES
Problems
1. If X and Y are independent random variables such that
X = −1 with probability 1/4
  = 0 with probability 1/4
  = 1 with probability 1/2
and Y = −1 with probability 1/3
  = 0 with probability 1/4
  = 1 with probability 5/12
Find (a) E[X² + 2XY + Y²] (b) E[X² − Y²]  (Ans: (a) 37/24; (b) 0)
2. Two statistically independent random variables X1 and X2 have mean value mX1 = 2, mX2 = 5. Find
the mean values of
(a) g(X1, X2) = X12 + X1X2
(b) g(X1, X2) = X1 + X2 (Ans: (a) 14; (b) 7)
3. Two random variables X and Y have joint pdf
1
fX,Y(x, y) = 0 < x < 6; 0<y<4
24
Find all third-order moments of X and Y. (Ans: 54, 16, 16, 24)
4. Statistically independent random variables X and Y have moments m10 = 5, m20 = 25, m02 = 6 and m11 = 10. Find the moment m22. (Ans: Zero)
5. The joint pdf of the random variables X and Y is defined as
fX,Y(x, y) = 8e^{−4y} for 0 < x < 0.5, y > 0; 0 otherwise
Find m22. (Ans: 1/96)
6. Two random variables X and Y have means 2 and 0 respectively and variances 4 and 9 respectively.
Their correlation coefficient is 0.6. New random variables W and V are defined as
W = X – 2Y and V = X + Y.
Find (a) mean, (b) variance and (c) correlation coefficient of V and W.
7. If the joint pdf of the random variables X and Y is given by fX,Y(x, y) = 2 − x − y in 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, find E[X] and E[Y]. (Ans: 5/12, 5/12)
8. Show that Cov (aX, bY) = ab Cov (X, Y).
9. X and Y are independent random variables with variance 2 and 3. Find the variance of 3X + 4Y.
(Ans. 66)
10. The random variables X and Y have the joint pdf
fX,Y(x, y) = 4xy for 0 < x < 1, 0 < y < 1; 0 otherwise
Find the joint pdf of V = X² and W = XY. (Ans: fV,W(v, w) = 2w/v; w² < v < 1, 0 < w < 1)
11. The two random variables X and Y are jointly distributed with density function fX,Y(x, y) = x + y, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1. Find Cov(X, Y) and the correlation coefficient.
(Ans: −1/144, −1/11)
12. If X is a random variable with E[X] = 1 and E[X(X – 1)] = 4
Find Var(X) and Var(2 – 3X). (Ans. 4, 36)
13. If X and Y are random variables with sX2 = 2, sY2 = 4 and covariance Cov(X, Y) = –2, find the
variance of the random variables Z = 3X – 4Y + 8. (Ans. 130)
14. The joint pdf of two random variables X and Y is
fX,Y(x, y) = xy/96 for 0 ≤ x ≤ 4, 1 ≤ y ≤ 5; 0 elsewhere
Find Var(2X + 3Y).
15. Given the joint pdf of X and Y
fX,Y(x, y) = 4.8y(2 − x) for 0 < x ≤ 1, 0 < y < x; 0 elsewhere
Find the correlation and the correlation coefficient. Are X and Y independent?
16. Show that jointly Gaussian random variables X and Y are independent if and only if r = 0.
17. Let X and Y be independent Cauchy random variables with parameters 4 and 9 respectively. Let
Z = X + Y.
(a) Find the characteristic function of Z.
(b) Find the pdf of Z.
18. X and Y are two independent random variables with parameters 1 and 4 respectively. Let
Z = X + Y.
(a) Find the characteristic function of Z
(b) Find pdf of Z from the characteristic function found in part (a).
19. Given Z = X – 2Y, where X and Y are two independent random variables.
(a) Find the characteristic function of Z.
(b) Find the mean and variance of Z from the characteristic function.
20. The joint pdf of two random variables X and Y is given by
fX,Y(x, y) = 1/(x²y²); x ≥ 1, y ≥ 1
Find the joint pdf of U = XY, V = X/Y.
21. Prove that Cov(X + Y, Z) = Cov(X, Z) + Cov(Y, Z).
22. If X and Y are identically distributed, not necessarily independent, show Cov(X + Y, X – Y) = 0.
23. Prove that
E[(X – Y)2] = E[X2] – E[Y2]
if Y = E[X/Z]
24. Discrete random variables X and Y have the joint density
fX,Y(x, y) = 0.2 δ(x + 2) δ(y − 1) + 0.25 δ(x + 1) δ(y) + 0.15 δ(x) δ(y + 1) + 0.3 δ(x − 2) δ(y − 2) + 0.1 δ(x − 3) δ(y)
Find m01, m10, m11, m20, m02 and the correlation coefficient.
25. Two Gaussian random variables X1 and X2 are defined by the mean and covariance matrices
[X̄] = [1  0]ᵀ ; [CX] = [[1, −3], [−3, 2]]
Two new random variables Y1 and Y2 are formed using the transformation
[T] = [[1, −1], [−1, 1]]
Find the matrices (a) [Ȳ] and (b) [CY]. (c) Also find ρ_{Y1Y2}.
26. Zero-mean Gaussian random variables X1, X2 and X3 have the covariance matrix
[CX] = [[1, −1, 2], [−1, 1, −1], [2, −1, 1]]
and are transformed to new variables
Y1 = X1 + X2 − X3
Y2 = X1 + 2X2 + 2X3
Y3 = 2X1 − X2 + 2X3
Find the covariance matrix of Y1, Y2 and Y3.
27. The joint pdf of random variables X and Y is given by
fX,Y(x, y) = (1/(2πC)) e^{−(2x² + y²/2)} for all x, y
Find Var(X), Var(Y) and Cov(X, Y). (Ans: 1/4, 1, 0)
28. X and Y are independent random variables with fX(x) = e^{−x}u(x) and fY(y) = 3e^{−3y}u(y). Find the density of Z = X/Y. (Ans: fZ(z) = 3/(z + 3)², z > 0)
29. If X and Y are independent random variables with density functions fX(x) = 1 for 1 ≤ x ≤ 2 and fY(y) = y/6 for 2 ≤ y ≤ 4, find the density of Z = XY.
(Ans: fZ(z) = (1/6)(z − 2) for 2 ≤ z ≤ 4; (1/6)(4 − z/2) for 4 ≤ z ≤ 8)
30. The pdfs of random variables X and Y are fX(x) = 2e^{−2x}, x > 0 and fY(y) = 2e^{−2y}, y > 0. Find the pdf of Z = X − Y.
(Ans: fZ(z) = e^{2z} for z < 0; e^{−2z} for z > 0)
31. Given the joint pdf of two random variables X and Y
fX,Y(x, y) = (1/24)(1 + xy) for 0 ≤ x ≤ 2, 0 ≤ y ≤ 4; 0 elsewhere
(Ans: fU(u) = (1/2)e^{u} for u < 0; (1/2)e^{−u} for u > 0)
36. If X and Y are independent random variables, each following N(0, 2), find the pdf of Z = 2X + 3Y.
(Ans: fZ(z) = (1/√(104π)) e^{−z²/104})
46. Calculate the correlation coefficient and the lines of regression from the following data.
x: 62 64 65 69 70 71 72 74
y: 126 125 139 145 165 152 180 208
(Ans: r = 0.9)
47. If x̄ = 970, ȳ = 18, σx = 38, σy = 2 and r = 0.6, find the line of regression of x on y.
(Ans: 11.4y + 764.8 = x)
48. The regression equations are 3x + 2y = 6 and 6x + y = 31. Find the correlation coefficient between X and Y.
49. Prove that
r(a1X + b1, a2Y + b2) = (a1a2/(|a1||a2|)) r(X, Y)
where a1 ≠ 0, a2 ≠ 0, and b1, b2 are constants.
Multiple-Choice Questions
1. Two random variables X and Y have mean and variance 2, 4 and 0, 9 respectively. Then the mean and variance of Z = X + 3Y are
(a) 2, 31 (b) 2, 85 (c) 2, 247 (d) 4, 85
2. Two random variables X and Y are related as Y = 4X + 9. The correlation coefficient between X and Y is
(a) 1 (b) 0.8 (c) 0.6 (d) 0.5
3. Cov(aX, bY) =
(a) a²b² Cov(X, Y) (b) ab Cov(X, Y) (c) (a/b) Cov(X, Y) (d) (a + b) Cov(X, Y)
4. If Var(X1) = 5, Var(X2) = 6, E[X1] = E[X2] = 0 and Cov(X1, X2) = 4, then Var(Y), given Y = 2X1 − 3X2, is
(a) 20 (b) 26 (c) 25 (d) 24
5. If fX,Y(x, y) = 2x, 0 ≤ x ≤ y ≤ 1, then E[XY] =
(a) 1/15 (b) 2/15 (c) 1/5 (d) 4/15
6. Cov(X + Y, X − Y) =
(a) Var(X) + Var(Y) (b) Var(X) − Var(Y) (c) Var(X² − Y²) (d) Var(X² + Y²)
7. Two random variables X and Y have the joint pdf
fX,Y(x, y) = xy/96 for 0 < x < 4, 1 < y < 5; 0 otherwise
The value of E[XY] is
(a) 248/27 (b) 8 (c) 8/9 (d) 31/9
8. The joint pdf of two random variables X and Y is
fX,Y(x, y) = 0.2 δ(x − a) δ(y − a) + 0.4 δ(x + a) δ(y − 3) + 0.35 δ(x − 1) δ(y − 1) + 0.05 δ(x − 2) δ(y − 2)
(c) tan⁻¹[(1 − r²)(σX + σY)/(r σX σY)] (d) tan⁻¹[(1 − r²) σX σY/(r(σX + σY))]
12. Given U = X + KY and V = X + (σX/σY)Y. If ρUV = 0, the value of K is
(a) −σX/σY (b) σX/σY (c) σY/σX (d) −σY/σX
13. The random variables X, Y and Z are uncorrelated and have the same variance. The correlation coefficient between (X + Y) and (Y + Z) is
(a) 1/2 (b) 1/8 (c) −1 (d) 1
14. The line of regression of x on y is
(a) x − x̄ = r(σX/σY)(y − ȳ) (b) y − ȳ = r(σY/σX)(x − x̄)
(c) x − x̄ = r(σY/σX)(y − ȳ) (d) y − ȳ = r(σX/σY)(x − x̄)
15. Two random variables X and Y have the joint pdf given by
fX,Y(x, y) = x + y, 0 ≤ x ≤ 1, 0 ≤ y ≤ 1
The correlation coefficient is
(a) −1/11 (b) 1/144 (c) 1/11 (d) −1/144
16. Two lines of regression coincide if and only if
(a) rxy = 0 (b) rxy = ±1/2 (c) rxy = ±1 (d) rxy = ±1/√2
17. The two regression lines for random variables X and Y are Y = 2X and X = Y/8. The correlation coefficient is
(a) 1/2 (b) 3/4 (c) 1 (d) 0
18. The joint characteristic function of two random variables X and Y is given by
φX,Y(ω1, ω2) = e^{−2ω1² − 8ω2²}
The mean value of X is given by
(a) 1 (b) 0 (c) −1 (d) 2
19. The joint pdf of two random variables X and Y is
fX,Y(x, y) = kxy
The value of E[X² + Y²] is
(a) 0 (b) 1 (c) 2 (d) 3
20. The pdf of a random variable X is fX(x) = (1/2)e^{−|x|}, −∞ < x < ∞. The variance of X is given by
(a) 1/4 (b) 1/2 (c) −1 (d) 1
21. The random variable X has the pdf
fX(x) = (1/√π) e^{−x² + x − 1/4}
The mean and variance are given by
(a) 1/2, 1 (b) 1, 1/2 (c) 1/2, 1/2 (d) 0, 1/2
22. Two random variables are related by
Y = aX + b
The covariance Cov(X, Y) is given by
(a) aσX² (b) a²σX² (c) 2aσX² (d) 0.5aσX²
23. Two random variables X and Y are independent with moments m10 = 2, m20 = 14 and m11 = −6. The value of m22 is
(a) 10 (b) 20 (c) 30 (d) 40
24. If X and Y are independent random variables, each with the U(0, 1) distribution, the density of U = XY is
(a) −log u, 0 ≤ u ≤ 1 (b) log u, 0 ≤ u ≤ 1 (c) e^{−u}, 0 ≤ u ≤ 1 (d) e^{u}, u < 0
25. The joint pdf of two random variables X and Y is
fX,Y(x, y) = x + y for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1; 0 elsewhere
INTRODUCTION 6.1
In Chapters 1 to 5, we studied probability theory in detail. In those chapters, we were concerned with outcomes of random experiments and the random variables used to represent them. We know that a random variable is a mapping of an outcome s ∈ S, where S is the sample space, to some real number X(s). The random-variable approach is applied to random problems that are not functions of time. However, in certain random experiments, the outcome may be a function of time. Especially in engineering, many random problems are time dependent. For example, a speech signal, which is a variation in voltage due to a speech utterance, is a function of time. Similarly, in a communication system, a set of messages that are to be transmitted over a channel is also a function of time. Such time functions are called random processes. In communication systems, a desired deterministic signal is often accompanied by an undesired random waveform known as noise, which limits the performance of the system. Since noise is a function of time and cannot be represented by a mathematical equation, it is treated as a random process. In this chapter, we will study such random processes, which may be viewed as a collection of random variables with t as a parameter. That is, instead of a single number X(s), we deal with X(s, t), where t ∈ T and T is called the parameter set of the process. For a random process X(s, t), the sample space is a collection of time functions. Figure 6.1 shows a few members of the collection. A realization of X(s, t) is a time function, also called a sample function or member function.
Fig. 6.1 A few members of the collection {X(s1, t), X(s2, t), X(s3, t), X(s4, t)} of a random process
A random process becomes a random variable when time is fixed at some particular value. For example, if t is fixed to a value t1, then the value of X(s, t) at t1 is a random variable X(s, t1). On the other hand, if we fix the sample point s, X(s, t) is a real function of time, and over all s ∈ S, X(s, t) can be viewed as a collection of time functions. In summary:
1. X(s, t1) is a random variable for a fixed time t1.
2. X(s1, t) is a sample realization for a fixed point s1 in the sample space S.
3. X(s1, t1) is a number.
4. X(s, t) is a collection of realizations and is called a random process.
Example 1 Consider a random experiment in which we toss a die and observe the dots on the top face. The possible outcomes of this random experiment are as shown in Fig. 6.2(a), and these outcomes constitute the sample space. To these outcomes, let us assign the time functions shown in Fig. 6.2(b). The set of waveforms {x1(t), x2(t), …, x6(t)} represents the random process.
Example 2 In communication systems, the carrier signal is often modelled as a sinusoid with random phase, given by
x(t) = A cos(2πft + θ)
The phase θ is a random variable with uniform distribution between −π and π. The reason for using a random phase is that the receiver knows neither the time when the transmitter was turned ON nor the distance from the transmitter to the receiver. Figure 6.3 shows realizations of the process.
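A few realizations of this process are easy to generate numerically. The Python sketch below is illustrative only; the amplitude A = 1 and frequency f = 0.05 are assumed values, not from the text. Each draw of θ selects one member of the ensemble, exactly as each outcome of the experiment does.

import numpy as np

rng = np.random.default_rng(2)
A, f = 1.0, 0.05
t = np.linspace(-5, 20, 501)
# Each draw of theta ~ U(-pi, pi) picks one sample function of the ensemble
realizations = [A * np.cos(2 * np.pi * f * t + rng.uniform(-np.pi, np.pi))
                for _ in range(4)]
# Plotting these four arrays against t reproduces pictures like Fig. 6.3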
Example 3 A random telegraph signal X(t) takes one of the two possible states 0 and 1. The time interval for which the signal remains in one of the possible states is an exponential random variable. Figure 6.4 shows one possible realization of the telegraph signal. Initially, at time t = 0, the signal is in the zero state for a time interval T1 and then switches to state 1. The signal remains in that state for a time interval T2 and then switches back to the zero state. The process of switching between the two states continues, with the time intervals specified by a sequence of exponential random variables.
Fig. 6.4 One possible realization of the random telegraph signal
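The switching mechanism can be simulated directly. A minimal Python sketch (assuming a switching rate of 1 per unit time, which the text does not specify):

import numpy as np

rng = np.random.default_rng(3)
rate, t_end = 1.0, 20.0
t, state, switches = 0.0, 0, []
while t < t_end:
    t += rng.exponential(1.0 / rate)   # holding times T1, T2, ... ~ Exponential
    switches.append((t, state))        # record when the current state ends
    state ^= 1                         # toggle between states 0 and 1
# X(t) is 0 on [0, T1), 1 on [T1, T1 + T2), and so on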
Fig. 6.5 A continuous random process
Let us consider the amplitude A of the sine wave X(t) = A sin(ωt + φ). If A takes any value in the interval [−2, 2], then the resulting process is a continuous random process. A realization of the continuous random process X(s, t) given by
X(s, t) = s cos(2πt)
is shown in Fig. 6.6.
Fig. 6.6 Sine-wave random process with A as a random variable
6.2.2 Discrete Random Process
A discrete random process is one in which the random variable X assumes discrete values while t is continuous.
The random opening and closing of a switch that connects either 0 or 50 volts is a sample function of a
discrete random process.
A random telegraph process is an example for discrete random process. At any time instant, the random
telegraph signal takes one of two possible states, either 0 or 1. Figure 6.7 shows one possible realization
of random telegraph signal in which the signal takes the value 1 for the time interval T1 and 0 for the time
interval T2.
DETERMINISTIC AND
NON-DETERMINISTIC PROCESS 6.3
A random process is said to be a non-deterministic random process if future values of any sample function cannot be exactly predicted from the observed past values. Almost all natural random processes are non-deterministic.
A random process is said to be deterministic if future values of any sample function can be predicted from past values.
REVIEW QUESTIONS
1. Define random process with suitable examples.
2. What are the classifications of a random process?
3. Explain deterministic and non-deterministic processes.
4. Give some examples for the following random processes.
(a) Continuous random process
(b) Discrete random process
(c) Continuous random sequence
(d) Discrete random sequence
5. What is the difference between random sequence and random process?
Solved Problems
6.1 A two-level semirandom binary process is defined as X(t) = 1 or 0 for (n – 1)T < t < nT where the
levels 1 and 0 occur with equal probability, T is a positive constant and n = 0, ±1, ±2…
(a) Sketch a typical sample function.
(b) Classify the process.
(c) Is the process deterministic?
Solution
(a) Given X(t) = 1 or 0 for (n − 1)T < t < nT. A sample function is shown in Fig. 6.10.
(b) This is a discrete random process.
(c) The process is not deterministic.
6.2 A random process X(t) is defined as X(t) = A where A is a continuous random variable uniformly
distributed on (0,1).
(a) Sketch any two sample functions.
(b) Classify the process.
(c) Is it deterministic?
Solution
(a) A is a uniform random variable on (0,1). Two sample functions for different values of A are shown
in Fig. 6.11.
Fig. 6.11 Two sample functions of X(t) = A, with levels A = 0.4 and A = 0.52
6.3 Repeat solved problem (6.2) for X(t) = At where A ~ U(0, 1).
Solution
(a) Here, A represents the slope of a straight line passing through the origin. Two different sample
functions are shown in Fig. 6.12.
Fig. 6.12 Two sample functions of X(t) = At, with slopes 0.5 and 1/3
Practice Problem
6.1 A discrete-time random process is defined as follows. A coin is tossed. If the outcome is heads, then Xn = (−1)ⁿ, and if the outcome is tails, Xn = (−1)ⁿ⁺¹ for all n. Sketch some sample functions of the process.
If any of the probability density functions do change with the choice of time origin, then the process is
non-stationary. For such processes, the mean value and variance also depend on time.
and fX(x; t) = fX(x; t + τ) (6.15)
When a random process satisfies the above condition, the mean value of the process does not change with a time shift. The mean value of the random process X(t) at t1 is
E[X(t1)] = ∫_{−∞}^{∞} x1 fX(x1; t1) dx1 (6.16)
Similarly, the mean value of the random variable X(t2) is
E[X(t2)] = ∫_{−∞}^{∞} x1 fX(x1; t2) dx1 (6.17)
Since t2 = t1 + τ, we have
E[X(t2)] = ∫_{−∞}^{∞} x1 fX(x1; t1 + τ) dx1
Using Eq. (6.15), we can write
E[X(t2)] = ∫_{−∞}^{∞} x1 fX(x1; t1) dx1 = E[X(t1)] (6.18)
That is, the mean value of the random process is constant and does not change with a shift in time origin.
= RXX(τ) (6.23)
The second-order stationary process is also known as a Wide-Sense Stationary (WSS) process.
Definition A random process is wide-sense stationary if its mean function and autocorrelation function are invariant to a time shift. That is,
mX(t) = mX = constant (6.24)
RXX(t, t + τ) = RXX(τ) (6.25)
A strict-sense stationary process is also WSS provided that the mean and autocorrelation functions exist. However, a WSS process need not be stationary in the strict sense.
REVIEW QUESTIONS
6. Define the distribution and density functions of a random process.
7. Define the mean and autocorrelation function of a random process.
8. What is meant by first-order and second-order stationarity?
9. When are two random processes said to be statistically independent?
10. Define a second-order stationary and a wide-sense stationary process. When are random processes said to be jointly wide-sense stationary? Give an example of a WSS process.
11. What is the difference between an SSS and a WSS process?
12. When are the processes X(t) and Y(t) said to be jointly stationary in the wide sense?
Solved Problems
6.4 In the fair-coin experiment, a random process X(t) is defined as follows: X(t) = cos p t if heads occur,
X(t) = t if tails occur. (a) Find E[X(t)]. (b) Find FX(x, t) for t = 0.25, 0.5,1.
Solution The random process X(t) takes the values cos πt and t, each with probability 1/2. Therefore,
E[X(t)] = Σᵢ Xᵢ(t) P[X(t) = Xᵢ(t)] = (1/2) cos πt + (1/2)t = 0.5 cos πt + 0.5t
X(t, heads) = cos πt = 1/√2 for t = 0.25
  = 0 for t = 0.5
  = −1 for t = 1
X(t, tails) = t = 0.25 for t = 0.25
  = 0.5 for t = 0.5
  = 1 for t = 1
For each t, FX(x, t) is a staircase function with jumps of height 1/2 at the two possible values of X(t). The distribution functions FX(x, t) are shown in Fig. 6.13.
Fig. 6.13
6.5 A random process X(t) is determined by tossing two dice to decide which of the following sinusoids to pick:
X(t) = x1(t) = sin t if the sum is 3 or 7
  = x2(t) = cos 2t if the sum is 2 or 12
  = x3(t) = sin 2t otherwise
Find mX(π/4), mX(π/2), σX²(π/4) and σX²(π/2).
Solution In the two-dice experiment, the number of possible outcomes is 36. The sum of the two dice equal to three can be obtained in two ways: (1, 2) and (2, 1). The sum equal to seven can be obtained in six ways: (1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1). Therefore, the probability
P[X(t) = x1(t)] = 2/36 + 6/36 = 2/9
The only way the sum of the two dice equals 2 is (1, 1), and the only way it equals 12 is (6, 6). Therefore,
P[X(t) = x2(t)] = 1/36 + 1/36 = 1/18
P[X(t) = x3(t)] = 1 − 2/9 − 1/18 = 13/18
mX(t) = E[X(t)] = Σᵢ₌₁³ xᵢ(t) P[X(t) = xᵢ(t)]
= x1(t) P[X(t) = x1(t)] + x2(t) P[X(t) = x2(t)] + x3(t) P[X(t) = x3(t)]
= (2/9) sin t + (1/18) cos 2t + (13/18) sin 2t
Hence mX(π/4) = (2/9)(1/√2) + (1/18)(0) + (13/18)(1) = 0.8793 and mX(π/2) = (2/9)(1) + (1/18)(−1) + (13/18)(0) = 1/6.
E[X²(π/4)] = (2/9) sin²(π/4) + (1/18) cos²(π/2) + (13/18) sin²(π/2)
= (2/9)(1/2) + (1/18)(0) + (13/18)(1) = 5/6
E[X²(π/2)] = (2/9) sin²(π/2) + (1/18) cos²(π) + (13/18) sin²(π)
= 2/9 + (1/18)(−1)² + (13/18)(0) = 5/18
σX²(π/4) = E[X²(π/4)] − {mX(π/4)}² = 5/6 − (0.8793)² = 0.0602
σX²(π/2) = E[X²(π/2)] − {mX(π/2)}² = 5/18 − 1/36 = 1/4
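These values can be verified by simulating the two-dice experiment (a quick check in Python, not part of the text):

import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
s = rng.integers(1, 7, n) + rng.integers(1, 7, n)   # sum of two dice
t = np.pi / 4
x = np.where(np.isin(s, (3, 7)), np.sin(t),
             np.where(np.isin(s, (2, 12)), np.cos(2 * t), np.sin(2 * t)))
print(x.mean(), x.var())   # ~0.8793 and ~0.0602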
6.6 If X(t) = Y cos t + Z sin t for all t, where Y and Z are independent binary random variables, each of which assumes the values −1 and 2 with probabilities 2/3 and 1/3 respectively, prove that {X(t)} is wide-sense stationary.
Solution Given that Y and Z are independent binary random variables taking the values −1 and 2 with probabilities 2/3 and 1/3 respectively:
E[Y] = Σᵢ yᵢ pY(yᵢ) = (−1)(2/3) + 2(1/3) = 0
E[Z] = Σᵢ zᵢ pZ(zᵢ) = (−1)(2/3) + 2(1/3) = 0
Similarly,
E[Y²] = Σᵢ yᵢ² pY(yᵢ) = (−1)²(2/3) + (2)²(1/3) = 2/3 + 4/3 = 2
E[Z²] = Σᵢ zᵢ² pZ(zᵢ) = (−1)²(2/3) + (2)²(1/3) = 2/3 + 4/3 = 2
Since Y and Z are independent, E[YZ] = E[Y] E[Z] = 0.
E[X(t)] = E[Y cos t + Z sin t] = E[Y] cos t + E[Z] sin t = 0
RXX(t, t + τ) = E[X(t) X(t + τ)]
= E[(Y cos t + Z sin t)(Y cos(t + τ) + Z sin(t + τ))]
= E[Y²] cos t cos(t + τ) + E[YZ] sin t cos(t + τ) + E[YZ] cos t sin(t + τ) + E[Z²] sin t sin(t + τ)
We know E[YZ] = 0 and E[Y²] = E[Z²] = 2, so
RXX(t, t + τ) = 2[cos t cos(t + τ) + sin t sin(t + τ)] = 2 cos(t + τ − t) = 2 cos τ
Since the mean is constant and the autocorrelation depends only on τ, X(t) is a WSS process.
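A short Monte-Carlo check of this result (a sketch, not from the text): for any chosen t and τ, the sample average of X(t)X(t + τ) should approach 2 cos τ.

import numpy as np

rng = np.random.default_rng(5)
n = 1_000_000
vals, p = np.array([-1.0, 2.0]), np.array([2 / 3, 1 / 3])
Y = rng.choice(vals, n, p=p)
Z = rng.choice(vals, n, p=p)
t, tau = 0.7, 1.3                          # arbitrary time instants
x1 = Y * np.cos(t) + Z * np.sin(t)
x2 = Y * np.cos(t + tau) + Z * np.sin(t + tau)
print(np.mean(x1 * x2), 2 * np.cos(tau))   # both ~0.535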
Practice Problems
6.2 If X(t) = cos(ωt + φ), where φ is uniformly distributed in (−π, π), show that X(t) is stationary in the wide sense.
(Ans: RXX(t, t + τ) = (cos ωτ)/2)
6.3 If X(t) = A cos ωt + B sin ωt, where A and B are independent random variables and ω is a constant, then prove that X(t) is WSS.
(Fig. 6.14: five waveforms (a) to (e))
We can easily observe from the figure that the correlation between the waveforms (b) and (c) is strong but negative. The correlation between (a) and (d) is also strong but negative. The correlation of the waveform shown in Fig. 6.14(e) with the other waveforms is very weak. The correlation between the waveforms (a) and (c), and between (b) and (d), is medium and positive.
Here, γxx(τ) measures the similarity between a signal and its time-shifted version.
The cross-correlation function of two deterministic power waveforms x(t) and y(t) is defined as
γxy(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) y(t + τ) dt (6.28)
Here γxy(τ) measures the similarity between x(t) and the shifted y(t).
1. The autocorrelation function is bounded by its value at the origin, that is, |RXX(τ)| ≤ RXX(0). In other words, the autocorrelation function RXX(τ) is maximum at τ = 0. Consider the inequality
E{[X(t + τ) ± X(t)]²} ≥ 0
E[X²(t + τ) ± 2X(t + τ)X(t) + X²(t)] ≥ 0
⟹ RXX(0) ± 2RXX(τ) + RXX(0) ≥ 0
⟹ 2RXX(0) ≥ ∓2RXX(τ)
or |RXX(τ)| ≤ RXX(0) (6.30)
2. Autocorrelation function RXX(t) is an even function, that is
RXX (t ) = RXX (-t ) (6.31)
RXX (t ) = E[ X (t ) X (t + t )]
RXX (-t ) = E[ X (t ) X (t - t )]
= E[ X (t + t ) X (t )]
= E[ X (t ) X (t + t )] = RXX (t )
3. The autocorrelation function RXX(t), at t = 0 is equal to mean square value. That is,
RXX (0) = E[ X 2 (t )]
RXX (t ) = E[ X (t ) X (t + t )]
RXX (0) = E[ X (t ) X (t )] = E[ X 2 (t )] (6.32)
The mean square value is also known as average power of the process.
4. If X(t) has no periodic components and is ergodic, and mX π 0, then
lim RXX (t ) = ( m X )2 (6.33)
t Æ•
RXX (t ) = E[ X (t ) X (t + t )]
If X(t) has no periodic component then X(t) and X(t + t) are uncorrelated as t Æ •. Therefore, we
can write
lim E[ X (t ) X (t + t )] = E[ X (t )]E[ X (t + t )]
t Æ•
E[ X (t )] = E[ X (t + t )] = m X
Therefore,
lim RXX (t ) = m X2
t Æ•
Consider
RXX(t ± T) = E[X(t) X(t + t ± T)]
Since X(t) is periodic with T, we can write
X(t + t ± T) = X(t + t)
fi RXX(t ± T) = E[X(t) X(t + t)] = RXX(t)
Since RXX(t ± T) = RXX(t)
RXX(t) is periodic.
7. If X(t) has a dc component, then RXX(τ) will have a dc component. Thus, if X(t) = k + N(t), where k is a constant, then RXX(τ) = k² + RNN(τ). For X(t) = k + N(t),
RXX(τ) = E[X(t) X(t + τ)]
= E[{k + N(t)}{k + N(t + τ)}]
= k² + kE[N(t)] + kE[N(t + τ)] + E[N(t)N(t + τ)]
If E[N(t)] = 0, then RXX(τ) = k² + RNN(τ).
8. The autocorrelation function RXX(t) cannot have an arbitrary shape.
9. For a random process, its autocorrelation function (if exists) is uniquely determined by the joint
probability density function. But the pdf cannot, in general, be determined uniquely by the ACF.
10. If X(t) is a differentiable random process, then the autocorrelation function of the derivative Ẋ(t) is the negative second-order derivative of that of X(t):
R_{XẊ}(τ) = E[X(t) Ẋ(t + τ)] = (d/dτ) RXX(τ) (6.34)
R_{ẊẊ}(τ) = E[Ẋ(t) Ẋ(t + τ)] = −(d²/dτ²) RXX(τ) (6.35)
(For proof, refer to Solved Problem 6.60.)
11. An additive deterministic term of a random process has no effect on its autocovariance. Let X(t) be a WSS process and x(t) a deterministic function. Then for
Y(t) = X(t) + x(t)
mY(t) = E[Y(t)] = x(t) + E[X(t)] = x(t) + mX (6.36)
so Y(t) − mY(t) = X(t) − mX, and therefore CYY(t, t + τ) = E[(X(t) − mX)(X(t + τ) − mX)] = CXX(τ).
6. If X(t) and Y(t) are orthogonal random processes, then RXY(t, t + t) = 0.
REVIEW QUESTIONS
13. State and prove the properties of autocorrelation functions.
14. Prove that the autocorrelation function is an even function.
15. Prove that autocorrelation function value at t = 0 is equal to mean square value.
16. Prove that |RXX(t)| £ RXX(0).
17. If X(t) has no periodic components and is ergodic, then prove that lim_{τ→∞} RXX(τ) = (mX)².
18. Prove that if X(t) has a periodic component then RXX(t) will have a periodic component with the
same period.
19. Prove that if X(t) has a dc component then RXX(t) will have a dc component.
20. If X(t) is differentiable, then prove that
R_{XẊ}(τ) = (d/dτ) RXX(τ)
R_{ẊẊ}(τ) = −(d²/dτ²) RXX(τ)
21. Prove that an additive deterministic term of a random process has no effect on its autocovariance.
22. Prove that a multiplicative deterministic term of a random process acts as a scaling factor for its
covariance.
23. Consider a random process Y(t) = x(t) + X(t) where x(t) is a deterministic signal and is X(t) is WSS
process. Prove that Y(t) is not stationary if x(t) is time-varying (refer solved problem 6.54).
24. Explain the properties of cross-correlation function.
Solved Problems
E[{Y(t + τ) + aX(t)}²] ≥ 0
E[Y²(t + τ)] + a²E[X²(t)] + 2aE[X(t)Y(t + τ)] ≥ 0 (6.42)
Solution From Solved Problem 6.7, we found that |RXY(τ)| ≤ √(RXX(0) RYY(0)).
Since the geometric mean of two positive numbers cannot exceed their arithmetic mean, we can write
√(RXX(0) RYY(0)) ≤ [RXX(0) + RYY(0)]/2
Therefore, |RXY(τ)| ≤ [RXX(0) + RYY(0)]/2 (6.45)
Solution We know Ẋ(t) = lim_{ε→0} [X(t + ε) − X(t)]/ε and Ẏ(t) = lim_{ε→0} [Y(t + ε) − Y(t)]/ε.
R_{XẎ}(τ) = E[X(t) Ẏ(t + τ)]
= E[X(t) lim_{ε→0} {Y(t + τ + ε) − Y(t + τ)}/ε]
= lim_{ε→0} {E[X(t)Y(t + τ + ε)] − E[X(t)Y(t + τ)]}/ε
= lim_{ε→0} {RXY(τ + ε) − RXY(τ)}/ε = (d/dτ) RXY(τ)
R_{ẊY}(τ) = E[Ẋ(t) Y(t + τ)]
= E[lim_{ε→0} {X(t + ε) − X(t)}/ε · Y(t + τ)]
= lim_{ε→0} {E[X(t + ε)Y(t + τ)] − E[X(t)Y(t + τ)]}/ε
= lim_{ε→0} {RXY(τ − ε) − RXY(τ)}/ε = −(d/dτ) RXY(τ)
R_{ẊẎ}(τ) = −(d/dτ)[(d/dτ) RXY(τ)] = −(d²/dτ²) RXY(τ)
Practice Problem
6.4 If X⁽ⁿ⁾(t) = dⁿX(t)/dtⁿ, then prove that
R_{X⁽ⁿ⁾Y⁽ᵐ⁾}(τ) = (−1)ᵐ d^{n+m} RXY(τ)/dτ^{n+m}
RELATIONSHIP BETWEEN TWO RANDOM PROCESSES 6.14
Consider the random processes X(t) and Y(t).
I. They are jointly wide-sense stationary if X(t) and Y(t) are both WSS and their cross-correlations RXY(t, t + τ) and RYX(t, t + τ) depend only on the time difference τ:
RXY(t, t + τ) = RXY(τ), RYX(t, t + τ) = RYX(τ)
II. The two random processes are uncorrelated if their cross-correlation is equal to the product of their mean functions:
RXY(t, t + τ) = RXY(τ) = E[X(t)] E[Y(t + τ)] = E[X(t)] E[Y(t)] (6.48)
Equivalently, two random processes are uncorrelated if their covariance CXY(t, t + τ) = 0. (6.49)
III. Two random processes are orthogonal if RXY(t, t + τ) = 0.
That is, if the sampling time is equal to Ts then the discrete-time random process is given by
X(n) = X(nTs)
6.10 Given the random process X(t) = A sin(ωt + θ), where A and ω are constants and θ is a uniformly distributed random variable between −π and π, define a new random process Y(t) = X²(t). (a) Are X(t) and Y(t) WSS? (b) Are X(t) and Y(t) jointly WSS?
Solution Given: X(t) = A sin(ωt + θ). The new random process is Y(t) = X²(t) = A² sin²(ωt + θ).
E[X(t)] = E[A sin(ωt + θ)] = A E[sin(ωt + θ)]
= (A/2π) ∫_{−π}^{π} sin(ωt + θ) dθ = −(A/2π)[cos(ωt + π) − cos(ωt − π)] = 0
RXX(t, t + τ) = E[X(t) X(t + τ)] = E[A sin(ωt + θ) A sin(ω(t + τ) + θ)]
= (A²/2) E[cos ωτ − cos(2ωt + ωτ + 2θ)]
= (A²/2) E[cos ωτ] − (A²/2) E[cos(2ωt + ωτ + 2θ)]
Since θ is a uniformly distributed random variable, E[cos(2ωt + ωτ + 2θ)] = 0. Also, ω is a constant. Therefore,
RXX(t, t + τ) = (A²/2) cos ωτ
∴ X(t) is WSS.
E[Y(t)] = E[A² sin²(ωt + θ)] = (A²/2) E[1 − cos(2ωt + 2θ)] = A²/2 − (A²/2) E[cos(2ωt + 2θ)]
E[cos(2ωt + 2θ)] = ∫_{−∞}^{∞} cos(2ωt + 2θ) fΘ(θ) dθ = (1/2π) ∫_{−π}^{π} cos(2ωt + 2θ) dθ
= (1/4π)[sin(2ωt + 2θ)]_{−π}^{π} = (1/4π)[sin(2ωt + 2π) − sin(2ωt − 2π)] = 0
⟹ E[Y(t)] = A²/2, a constant
RYY(t, t + τ) = E[Y(t) Y(t + τ)] = E[A² sin²(ωt + θ) A² sin²(ω(t + τ) + θ)]
= (A⁴/4) E[(cos ωτ − cos(2ωt + ωτ + 2θ))²]
= (A⁴/4) E[cos² ωτ + cos²(2ωt + ωτ + 2θ) − 2 cos ωτ cos(2ωt + ωτ + 2θ)]
= (A⁴/4) cos² ωτ + (A⁴/4) E[(1 − cos(4ωt + 2ωτ + 4θ))/2] − (A⁴/2) cos ωτ E[cos(2ωt + ωτ + 2θ)]
Since θ ~ U(−π, π), E[cos(2ωt + ωτ + 2θ)] = 0 and E[cos(4ωt + 2ωτ + 4θ)] = 0.
⟹ RYY(t, t + τ) = (A⁴/4) cos² ωτ + (A⁴/4)(1/2) = (A⁴/4)[cos² ωτ + 1/2]
⟹ Y(t) is also WSS.
6.11 A random process Y(t) = X(t) − X(t + τ) is defined in terms of a process X(t) that is at least wide-sense stationary.
(a) Show that the mean value of Y(t) is zero even if X(t) has a non-zero mean value.
(b) Show that σY² = 2[RXX(0) − RXX(τ)].
(c) If Y(t) = X(t) + X(t + τ), find E[Y(t)] and σY².
Solution
(a) E[Y(t)] = E[X(t)] − E[X(t + τ)] = mX − mX = 0, even if mX ≠ 0.
(b) σY² = E[Y²(t)] − {E[Y(t)]}² = E[Y²(t)] (∵ E[Y(t)] = 0)
= E[X²(t) + X²(t + τ) − 2X(t)X(t + τ)]
= E[X²(t)] + E[X²(t + τ)] − 2E[X(t)X(t + τ)]
Since X(t) is at least first-order stationary,
E[X²(t + τ)] = ∫_{−∞}^{∞} x² fX(x; t + τ) dx = ∫_{−∞}^{∞} x² fX(x; t) dx = E[X²(t)]
⟹ σY² = 2E[X²(t)] − 2RXX(t, t + τ)
Since X(t) is wide-sense stationary, RXX(t, t + τ) = RXX(τ); also E[X²(t)] = RXX(0). Hence
σY² = 2RXX(0) − 2RXX(τ) = 2[RXX(0) − RXX(τ)]
(c) If Y(t) = X(t) + X(t + τ),
E[Y(t)] = E[X(t)] + E[X(t + τ)] = 2E[X(t)]
If E[X(t)] = 0, then E[Y(t)] = 0. If E[X(t)] is a non-zero constant C, then E[Y(t)] = 2C.
σY² = E[Y²(t)] − {E[Y(t)]}² = E[(X(t) + X(t + τ))²] − (2C)²
= E[X²(t)] + E[X²(t + τ)] + 2E[X(t)X(t + τ)] − 4C²
We know E[X²(t)] = E[X²(t + τ)] = RXX(0), so
σY² = 2RXX(0) + 2RXX(τ) − 4C²
6.12 A random process X(t) = sin(ωt + φ), where φ is a random variable uniformly distributed in the interval (0, 2π). Prove that cov[t, t + τ] = RXX(t, t + τ) = (cos ωτ)/2.
Solution Given:
fΦ(φ) = 1/2π for 0 ≤ φ ≤ 2π; 0 otherwise
RXX(t, t + τ) = E[X(t) X(t + τ)] = E[sin(ωt + φ) sin(ω(t + τ) + φ)]
= (1/2) E[cos ωτ − cos(2ωt + ωτ + 2φ)]
= (1/2) cos ωτ − (1/2) E[cos(2ωt + ωτ + 2φ)]
E[cos(2ωt + ωτ + 2φ)] = (1/2π) ∫_0^{2π} cos(2ωt + ωτ + 2φ) dφ = 0
∴ RXX(t, t + τ) = (1/2) cos ωτ = RXX(τ)
E[X(t)] = E[sin(ωt + φ)] = (1/2π) ∫_0^{2π} sin(ωt + φ) dφ = 0
Similarly, E[X(t + τ)] = (1/2π) ∫_0^{2π} sin(ωt + ωτ + φ) dφ = 0
cov(t, t + τ) = RXX(t, t + τ) − E[X(t)] E[X(t + τ)] = RXX(t, t + τ) = (1/2) cos ωτ
6.13 A die is tossed, and corresponding to the dots S = {1, 2, 3, 4, 5, 6}, a random process X(t) is formed with the following time functions:
X(1, t) = −3; X(2, t) = 1; X(3, t) = 1 − t; X(4, t) = 1 + t; X(5, t) = 2 − t; X(6, t) = t − 2
Find mX(t), E[X²], σX²(t), RXX(t1, t2) and CXX(t1, t2).
Solution The probability of each event is the same and equal to 1/6.
mX(t) = E[X(t)] = Σᵢ pᵢ Xᵢ(t) = (1/6) Σᵢ₌₁⁶ Xᵢ(t)
= (1/6)[−3 + 1 + 1 − t + 1 + t + 2 − t + t − 2] = 0
Since the mean is zero,
σX²(t) = E[X²(t)] = (1/6) Σᵢ₌₁⁶ Xᵢ²(t)
= (1/6)[(−3)² + 1 + (1 − t)² + (1 + t)² + (2 − t)² + (t − 2)²]
= (1/6)[9 + 1 + 1 − 2t + t² + 1 + 2t + t² + 4 − 4t + t² + t² − 4t + 4]
= (1/6)[20 + 4t² − 8t]
RXX(t1, t2) = E[X(t1) X(t2)]
= (1/6)[(−3)(−3) + (1)(1) + (1 − t1)(1 − t2) + (1 + t1)(1 + t2) + (2 − t1)(2 − t2) + (t1 − 2)(t2 − 2)]
= (1/6)[20 − 4t1 − 4t2 + 4t1t2]
CXX(t1, t2) = RXX(t1, t2) (since the mean is zero)
6.14 A random process consists of three sample functions X(s1, t) = 1, X(s2, t) = 2 sin t and X(s3, t) = 3 cos t, each occurring with equal probability. Is the process stationary in any sense?
Solution The first-order density of the process is
fX(x; t) = (1/3) δ(x − x1) + (1/3) δ(x − x2) + (1/3) δ(x − x3)
E[X(t)] = ∫_{−∞}^{∞} x fX(x; t) dx = (1/3)x1 + (1/3)x2 + (1/3)x3
= (1/3)(1) + (1/3)(2 sin t) + (1/3)(3 cos t)
= 1/3 + (2/3) sin t + cos t
Since the mean value is time dependent, X(t) is not stationary in any sense.
6.15 A random process is defined by X(t) = A, where A is a continuous random variable uniformly distributed on (0, 1). Find the mean and autocorrelation of the process.
Solution Given: X(t) = A; A ~ U(0, 1). Therefore,
E[X(t)] = E[A] = ∫_{−∞}^{∞} a fA(a) da = ∫_0^1 a da = 1/2
mX = 1/2
RXX(t, t + τ) = E[X(t) X(t + τ)] = E[A · A] = E[A²]
E[A²] = ∫_{−∞}^{∞} a² fA(a) da = ∫_0^1 a² da = 1/3
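Both moments are confirmed by a one-line simulation (not part of the text):

import numpy as np

a = np.random.default_rng(6).random(1_000_000)   # A ~ U(0, 1)
print(a.mean(), (a * a).mean())                  # ~0.5 and ~0.3333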
6.16 For the random process described in the above problem, find the first-order and second-order density functions.
Solution Since X(t) = A for all t, the first-order density is fX(x1; t1) = fA(x1). Once X(t1) = x1 is known, X(t2) = x1 with probability 1, so the conditional density is
fX(x2; t2 | x1; t1) = fX(x1, x2; t1, t2)/fX(x1; t1) = δ(x2 − x1)
Therefore, fX(x1, x2; t1, t2) = fA(x1) δ(x2 − x1).
6.17 Given a random process X(t) = Ae^{−at}, where a is a constant and A ~ U(0, 1), find the mean and autocorrelation of X(t).
Solution
(a) Given: X(t) = Ae^{−at}, where a is a constant. For A ~ U(0, 1),
E[A] = (b + a)/2 = (1 + 0)/2 = 0.5 and E[A²] = ∫_0^1 a² da = 1/3
mX(t) = E[X(t)] = E[A]e^{−at} = 0.5e^{−at}
(b) Autocorrelation:
RXX(t, t + τ) = E[X(t) X(t + τ)] = E[Ae^{−at} Ae^{−a(t+τ)}]
= E[A²] e^{−2at − aτ} = (1/3) e^{−(2at + aτ)}
6.18 Find the mean, autocorrelation, variance and autocovariance of the random process X(t) = tU, where U ~ U(0, 1).
Solution Given X(t) = tU, with fU(u) = 1 for 0 ≤ u ≤ 1 and 0 otherwise, so that E[U] = 1/2 and E[U²] = 1/3.
The mean: mX(t) = E[tU] = t ∫_0^1 u du = t/2
E[X²(t)] = t² ∫_0^1 u² du = t²/3
σX² = E[X²(t)] − [mX(t)]² = t²/3 − t²/4 = t²/12
Autocorrelation:
RXX(t, t + τ) = E[X(t) X(t + τ)] = E[tU (t + τ)U] = t(t + τ) E[U²] = t(t + τ)/3
Autocovariance:
CXX(t, t + τ) = RXX(t, t + τ) − mX(t) mX(t + τ)
= t(t + τ)/3 − (t/2)((t + τ)/2) = t(t + τ)/12
6.19 A stochastic process is described by X(t) = A sin t + B cos t, where A and B are independent random variables with zero mean and equal standard deviation. Show that the process is stationary of the second order.
Solution Given: X(t) = A sin t + B cos t, with
E[A] = E[B] = 0 and σA² = σB² = σ² ⟹ E[A²] = E[B²] = σ²
E[X(t)] = E[A] sin t + E[B] cos t = 0
RXX(t, t + τ) = E[X(t) X(t + τ)] = E[(A sin t + B cos t)(A sin(t + τ) + B cos(t + τ))]
= E[A²] sin t sin(t + τ) + E[B²] cos t cos(t + τ) + E[AB][sin t cos(t + τ) + cos t sin(t + τ)]
Since A and B are independent with zero mean, E[AB] = 0. Hence
RXX(t, t + τ) = σ²[sin t sin(t + τ) + cos t cos(t + τ)] = σ² cos τ
The mean is constant and the autocorrelation depends only on τ; hence the process is stationary of the second order (wide-sense stationary).
6.20 Consider a random process X(t) defined by X(t) = U cos t + (V + 1) sin t, where U and V are independent random variables for which E[U] = E[V] = 0 and E[U²] = E[V²] = 1.
(a) Find the autocovariance function of X(t).
(b) Is X(t) wide-sense stationary? Explain your answer.
Solution Given: X(t) = U cos t + (V + 1) sin t, with E[U] = E[V] = 0, E[U²] = E[V²] = 1 and, since U and V are independent, E[UV] = E[U] E[V] = 0.
mX(t) = E[X(t)] = E[U] cos t + (E[V] + 1) sin t = sin t
(a) CXX(t, t + τ) = E[{X(t) − mX(t)}{X(t + τ) − mX(t + τ)}]
= E[(U cos t + V sin t)(U cos(t + τ) + V sin(t + τ))]
= cos t cos(t + τ) + sin t sin(t + τ) = cos τ
(b) Although the autocovariance depends only on τ, the mean sin t depends on t; hence X(t) is not wide-sense stationary.
Practice Problems
6.5 Consider a random process X(t) = B cos (50 t + f) where B and f are independent random variables. B is a
random variable with zero mean and unit variance. f is uniformly distributed in the interval (–p, p). Find the mean and
autocorrelation of the process.
6.6 Given a stationary random process X(t) = 10 cos (100 t + q) where q Œ (–p, p) follows uniform distribution. Find
the mean and autocorrelation functions of the process.
Solved Problems
Solution
(a) Given: Y1(t) = X(t) cos (w0t)
Y2(t) = Y(t) cos (w0t + q)
Since X(t) and Y(t) are jointly wide-sense stationary,
RXY (t , t + t ) = RXY (t )
= RY1Y2 (t , t + t ) = E[ Y1 (t ) Y2 (t + t )]
= E[ X (t ) cos w 0 (t ) Y (t + t ) cos(w 0 t + w 0t + q )]
Random Processes—Temporal Characteristics 6.33
RXY (t ) ÏÔ ¸Ô
• •
=
2 Ô Ú
Ì cos(w 0t + q ) fQ (q ) dq + Ú cos(2w t + w t + q ) f
0 0 Q (q ) dq ˝
Ô˛
Ó -• -•
Ï q2 q2 ¸
RXY (t ) Ô 1 1 Ô
= Ì
2 Ô q 2 - q1 Ú
cos(w 0t + q ) dq +
q 2 - q1 Ú
cos(2w 0 t + w 0t + q ) dq ˝
Ô˛
Ó q1 q1
RXY (t ) q q
= {sin (w 0t + q ) q2 + sin (2w 0 t + w 0t + q ) q2 }
2(q 2 - q1 ) 1 1
RXY (t )
= {sin (w 0t + q 2 ) - sin(w 0t + q1 ) + sin((2w 0 t + w 0t + q 2 ) - sin((2w 0 t + w 0t + q1 )}
2(q 2 - q1 )
Ê (q - q ) ˆ
when sin Á 2 1 ˜ = 0
Ë 2 ¯
Ê (q 2 - q1 ) ˆ
fi ÁË ˜¯ = p fi q 2 - q1 = 2p
2
That is, q ~ U(0, 2p) then Y1(t) and Y2(t) are orthogonal.
6.22 Consider a random process Y(t) = X(t) cos(ω0t + θ), where X(t) is second-order stationary and θ is a random variable independent of X(t) and uniform on (0, 2π). The above process is applied to a square-law device, producing the output W(t) = Y²(t).
Find (a) E[W(t)], (b) RWW(t, t + τ), and (c) whether or not W(t) is wide-sense stationary.
Solution Since θ is independent of X(t),
E[W(t)] = E[X²(t)] E[cos²(ω0t + θ)]
The solution takes the first-order density of X(t) to be uniform on (0, 2π), so that
E[X(t)] = 2π/2 = π; σX² = (2π − 0)²/12 = π²/3
E[X²(t)] = σX² + {E[X]}² = π²/3 + π² = 4π²/3
E[W(t)] = (4π²/3) E[cos²(ω0t + θ)]
= (4π²/3)(1/2π) ∫_0^{2π} cos²(ω0t + θ) dθ
= (2π/3) ∫_0^{2π} [1 + cos 2(ω0t + θ)]/2 dθ
= (π/3){2π + ∫_0^{2π} cos 2(ω0t + θ) dθ} = (π/3)(2π) = 2π²/3
RWW(t, t + τ) = E[W(t) W(t + τ)]
= E[X²(t) cos²(ω0t + θ) X²(t + τ) cos²(ω0t + ω0τ + θ)]
= E[X²(t) X²(t + τ)] E[cos²(ω0t + θ) cos²(ω0t + ω0τ + θ)]
Consider
E[cos²(ω0t + θ) cos²(ω0t + ω0τ + θ)]
= E{[(1 + cos 2(ω0t + θ))/2][(1 + cos 2(ω0t + ω0τ + θ))/2]}
= (1/4){1 + E[cos 2(ω0t + θ) cos 2(ω0t + ω0τ + θ)]} (∵ E[cos 2(ω0t + θ)] = E[cos 2(ω0t + ω0τ + θ)] = 0)
= (1/4){1 + (1/2) E[cos 2ω0τ + cos(4ω0t + 2ω0τ + 4θ)]}
= (1/4)[1 + (1/2) cos 2ω0τ] (∵ E[cos(4ω0t + 2ω0τ + 4θ)] = 0)
= (1/4)[1 + (1/2)(2cos² ω0τ − 1)] = (1/4)[1/2 + cos² ω0τ]
⟹ RWW(t, t + τ) = E[X²(t) X²(t + τ)] (1/4)[1/2 + cos² ω0τ]
Since E[W(t)] is constant and RWW(t, t + τ) depends only on τ, the process W(t) is WSS.
Solved Problem
6.23 If X(t) is a stationary random process having mean value E[X(t)] = 3 and autocorrelation function RXX(τ) = 9 + 2e^{−|τ|}, find the mean value and the variance of the random variable
Y = ∫_0^2 X(t) dt
Solution Given: Y = ∫_0^2 X(t) dt
E[Y] = E[∫_0^2 X(t) dt] = ∫_0^2 E[X(t)] dt = ∫_0^2 3 dt = 6
E[Y²] = E[∫_0^2 X(t) dt ∫_0^2 X(u) du]
= ∫_0^2 ∫_0^2 E[X(t) X(u)] dt du = ∫_0^2 ∫_0^2 RXX(t − u) dt du
= ∫_0^2 ∫_0^2 [9 + 2e^{−|t−u|}] dt du = 36 + 2 ∫_0^2 ∫_0^2 e^{−|t−u|} dt du
By symmetry,
∫_0^2 ∫_0^2 e^{−|t−u|} dt du = 2 ∫_0^2 ∫_0^t e^{−(t−u)} du dt = 2 ∫_0^2 (1 − e^{−t}) dt = 2(1 + e^{−2})
⟹ E[Y²] = 36 + 4(1 + e^{−2}) = 40 + 4e^{−2}
σY² = E[Y²] − {E[Y]}² = 40 + 4e^{−2} − 36 = 4(1 + e^{−2})
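The double integral can be checked numerically (a sketch under the stated RXX, not part of the text), using a midpoint grid on [0, 2] × [0, 2]:

import numpy as np

m = 2000
t = (np.arange(m) + 0.5) * (2.0 / m)        # midpoints on (0, 2)
T, U = np.meshgrid(t, t)
EY2 = np.sum(9 + 2 * np.exp(-np.abs(T - U))) * (2.0 / m) ** 2
print(EY2, 40 + 4 * np.exp(-2))             # both ~40.54
print(EY2 - 36, 4 * (1 + np.exp(-2)))       # variance, ~4.54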
Practice Problems
6.7 A random process is given by Z(t) = At³ + B, where A and B are independent random variables. Find mZ(t).
6.8 Let X(t) = A cos ωt + B sin ωt, where A and B are Gaussian random variables with zero mean and variance σ². Find the mean and autocorrelation of X(t).
6.9 If X(t) = A sin(ωt + θ), where A and ω are constants and θ is a random variable uniformly distributed over (−π, π), find the autocorrelation function of X²(t).
(Ans: R(t1, t2) = (A⁴/8)[2 + cos 2ω(t1 − t2)])
In some other random processes, the mean and autocorrelation functions of every sample function are the same as those of the ensemble. This implies that it is possible to determine the statistical behaviour of the ensemble from a single sample function. In both cases, the statistical behaviour of the ensemble can be obtained from only one sample function. The processes for which the time averages and statistical averages are equal are known as ergodic processes. Thus, for an ergodic process, mx = mX and Rxx(τ) = RXX(τ), where the operator A below denotes the time average in a manner analogous to E for the statistical average.
The time average of the sample function x(t) is defined as
mx = x̄ = A[x(t)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) dt (6.62)
For any random process X(t), the values x̄ and Rxx(τ) are random variables. If we consider all sample functions, then the ensemble average mX can be obtained by finding the expectation of x̄:
mX = X̄ = E[x̄] = E[lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) dt]
= lim_{T→∞} (1/2T) ∫_{−T}^{T} E[x(t)] dt (6.64)
Similarly, the ensemble autocorrelation function is obtained from the expectation of Rxx(τ):
RXX(τ) = E[Rxx(τ)] (6.65)
= E[lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) x(t + τ) dt] (6.66)
If the variances of x̄ and Rxx(τ) are zero, then the ensemble averages are equal to the time averages, and we can write
mx = mX (6.67)
Rxx(τ) = RXX(τ) (6.68)
The validity of the above two equations is allowed by the ergodic theorem, which states that "all time averages are equal to the corresponding statistical averages". The random processes that satisfy the ergodic theorem are called ergodic processes.
Two random processes are called jointly ergodic if they are individually ergodic and their time cross-correlation function is equal to the statistical cross-correlation function:
Rxy(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) y(t + τ) dt = RXY(τ) (6.69)
Definitions
Ergodic Process A random process is said to be ergodic if all time averages are equal to the corresponding statistical averages.
Jointly Ergodic Processes Two random processes are jointly ergodic if they are individually ergodic and also have a time cross-correlation function that equals the statistical cross-correlation function:
Rxy(τ) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t) y(t + τ) dt = RXY(τ)
E[AX] = E[lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t) dt] = lim_{T→∞} (1/2T) ∫_{−T}^{T} X̄ dt = X̄ (6.71)
Var(AX) = E{[lim_{T→∞} (1/2T) ∫_{−T}^{T} [X(t) − X̄] dt]²}
= E[lim_{T→∞} (1/2T)² ∫_{−T}^{T} ∫_{−T}^{T} [X(t) − X̄][X(t1) − X̄] dt dt1]
= lim_{T→∞} (1/2T)² ∫_{−T}^{T} ∫_{−T}^{T} CXX(t, t1) dt dt1 (6.77)
Thus the process is mean ergodic if
lim_{T→∞} (1/4T²) ∫_{−T}^{T} ∫_{−T}^{T} CXX(t, t1) dt dt1 = 0 (6.78)
The above double integral is evaluated over the region shown in Fig. 6.16, bounded by the two vertical lines t = T and t = −T; in terms of τ = t1 − t, the lines τ = −T − τ's counterparts t = −T − τ and t = T − τ are two straight lines with slope −1.
Fig. 6.16 Region of integration for Eq. (6.78)
The total area can be divided into two regions, I1 and I2. The area I1 is swept by moving a horizontal strip from τmin = −2T up to the t-axis (τmax = 0); over this range, t varies from (−T − τ) to T. Similarly, for the area I2, τ varies from 0 to 2T and t varies from −T to (T − τ). Hence, for a WSS process, we can write
Var(AX) = lim_{T→∞} (1/4T²) {∫_{τ=−2T}^{0} ∫_{t=−T−τ}^{T} CXX(τ) dt dτ + ∫_{τ=0}^{2T} ∫_{t=−T}^{T−τ} CXX(τ) dt dτ} (6.80)
Consider I1:
I1 = ∫_{τ=−2T}^{0} ∫_{t=−T−τ}^{T} CXX(τ) dt dτ = ∫_{−2T}^{0} [T − (−T − τ)] CXX(τ) dτ = ∫_{−2T}^{0} (2T + τ) CXX(τ) dτ
Consider I2:
I2 = ∫_{τ=0}^{2T} ∫_{t=−T}^{T−τ} CXX(τ) dt dτ = ∫_0^{2T} (T − τ + T) CXX(τ) dτ = ∫_0^{2T} (2T − τ) CXX(τ) dτ
Therefore,
Var(AX) = (1/4T²) [∫_{−2T}^{0} (2T + τ) CXX(τ) dτ + ∫_0^{2T} (2T − τ) CXX(τ) dτ]
= (2T/4T²) [∫_{−2T}^{0} (1 + τ/2T) CXX(τ) dτ + ∫_0^{2T} (1 − τ/2T) CXX(τ) dτ] (6.81)
Since CXX(τ) = CXX(−τ), we can write
Var(AX) = (2/2T) ∫_0^{2T} (1 − |τ|/2T) CXX(τ) dτ
Thus, the sufficient condition for mean ergodicity of a process X(t) is
σ²_{AX} = Var(AX) = lim_{T→∞} (1/T) ∫_0^{2T} [1 − |τ|/2T] CXX(τ) dτ = 0 (6.82)
A discrete random process is said to be mean ergodic if the statistical average of the sequence and the time average of its samples are equal with probability 1. The condition for mean ergodicity is
lim_{N→∞} (1/(2N + 1)) Σ_{n=−2N}^{2N} [1 − |n|/(2N + 1)] CXX(n) = 0 (6.84)
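The condition is easy to evaluate numerically for a given autocovariance sequence. The Python sketch below uses the geometric covariance CXX(n) = 0.9^|n|, an assumed example rather than one from the text, and shows the left-hand side of Eq. (6.84) shrinking as N grows, as mean ergodicity requires:

import numpy as np

def lhs_684(C, N):
    # Left side of Eq. (6.84) for an autocovariance function C(n)
    n = np.arange(-2 * N, 2 * N + 1)
    return np.sum((1 - np.abs(n) / (2 * N + 1)) * C(n)) / (2 * N + 1)

C = lambda n: 0.9 ** np.abs(n)     # assumed, summable autocovariance
for N in (10, 100, 1000):
    print(N, lhs_684(C, N))        # tends to 0: the process is mean ergodic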
Let us define a process Y(t) = X(t) X(t + λ), where λ is a time offset. Then
E[Y(t)] = E[X(t) X(t + λ)] = RXX(λ) (6.86)
and RYY(τ) = E[Y(t) Y(t + τ)] = E[X(t) X(t + λ) X(t + τ) X(t + τ + λ)] (6.87)
REVIEW QUESTIONS
25. Define ensemble average and time average of a random process.
26. State mean ergodic theorem.
27. When is the process said to be ergodic in mean?
28. When is a random process said to be correlation ergodic?
Solved Problems
6.24 A zero-mean random process X(t) has the autocovariance CXX(τ) = 4(1 − |τ|) for |τ| ≤ 1 and zero elsewhere. Check whether X(t) is mean ergodic.
Solution
Var(AX) = lim_{T→∞} (1/2T) ∫_{−2T}^{2T} (1 − |τ|/2T) CXX(τ) dτ
Since CXX(τ) vanishes outside |τ| ≤ 1, for large T the factor (1 − |τ|/2T) → 1 over the range of integration, so
Var(AX) = lim_{T→∞} (1/2T) {∫_{−1}^{0} 4(1 + τ) dτ + ∫_0^1 4(1 − τ) dτ}
= lim_{T→∞} (1/2T) {4[τ + τ²/2]_{−1}^{0} + 4[τ − τ²/2]_0^1}
= lim_{T→∞} (1/2T)(2 + 2) = lim_{T→∞} (4/2T) = 0
Since Var(AX) = 0, the process X(t) is mean ergodic.
6.25 X(t) = A + B sin(ω0t + φ), where A, B and φ are independent random variables, φ ~ U(0, 2π), and A and B are uniformly distributed over (0, 1) and (0, 2) respectively. Check whether X(t) is mean ergodic.
Solution Given: X(t) = A + B sin(ω0t + φ)
mX = E[X(t)] = E[A] + E[B] E[sin(ω0t + φ)]
Since φ ~ U(0, 2π), E[sin(ω0t + φ)] = 0, so mX = E[A] = 1/2.
RXX(t, t + τ) = E[X(t) X(t + τ)] = E[{A + B sin(ω0t + φ)}{A + B sin(ω0t + ω0τ + φ)}]
= E[A²] + (1/2) E[B²] cos ω0τ
so that CXX(τ) = RXX(τ) − mX² = σA² + (1/2) E[B²] cos ω0τ
Var(AX) = lim_{T→∞} (1/2T) ∫_{−2T}^{2T} [σA² + (1/2) E[B²] cos ω0τ] [1 − |τ|/2T] dτ
The first term gives (1/2T) σA² ∫_{−2T}^{2T} (1 − |τ|/2T) dτ = (1/2T) σA² (2T) = σA². The cosine term gives
(1/2T)(1/2) E[B²] ∫_{−2T}^{2T} (1 − |τ|/2T) cos ω0τ dτ = E[B²] sin²(ω0T)/(2T²ω0²) → 0 as T → ∞
Hence Var(AX) = σA² = 1/12 ≠ 0.
Therefore, X(t) is not mean ergodic.
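The failure of mean ergodicity here is visible in simulation: every sample function's time average converges to its own value of A, not to the ensemble mean E[A] = 1/2. A sketch (not from the text, with ω0 = 2 assumed):

import numpy as np

rng = np.random.default_rng(7)
w0 = 2.0
t = np.linspace(0.0, 2000.0, 400_001)
for _ in range(4):
    A = rng.uniform(0, 1)                 # A ~ U(0, 1)
    B = rng.uniform(0, 2)                 # B ~ U(0, 2)
    phi = rng.uniform(0, 2 * np.pi)       # phi ~ U(0, 2*pi)
    x = A + B * np.sin(w0 * t + phi)
    print(x.mean(), A)   # time average ~ A for each realization, so Var(A_X) = sigma_A^2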
6.26 Let N(t) be a zero-mean wide-sense stationary noise process for which RNN(τ) = (N0/2) δ(τ), where N0 > 0 is a finite constant. Determine whether N(t) is mean ergodic.
Solution Given mN = 0, so CNN(τ) = RNN(τ). The process N(t) is mean ergodic if the variance σ²_{AN} = 0, where AN is the time average of the process N(t).
σ²_{AN} = lim_{T→∞} (1/2T) ∫_{−2T}^{2T} (1 − |τ|/2T) CNN(τ) dτ
= lim_{T→∞} (1/2T) ∫_{−2T}^{2T} (1 − |τ|/2T) (N0/2) δ(τ) dτ
= lim_{T→∞} (1/2T)(N0/2) = lim_{T→∞} N0/(4T) = 0
Hence N(t) is mean ergodic.
Practice Problems
Solved Problems
6.27 A random process is defined by X(t) = X0 + vt, where X0 and v are statistically independent random variables, uniformly distributed on the intervals [X01, X02] and [v1, v2] respectively. Find (a) the mean, (b) the autocorrelation, and (c) the autocovariance function of X(t). (d) Is X(t) stationary in any sense?
Solution Given: X(t) = X0 + vt.
(a) The mean mX(t) = E[X(t)] = E[X0] + E[v]t, since X0 and v are statistically independent. Both are uniformly distributed:
f_{X0}(x0) = 1/(X02 − X01) for X01 ≤ x0 ≤ X02; 0 otherwise
f_v(v) = 1/(v2 − v1) for v1 ≤ v ≤ v2
E[X0] = ∫ x0 f_{X0}(x0) dx0 = (X01 + X02)/2 and, similarly, E[v] = (v1 + v2)/2
mX(t) = (X01 + X02)/2 + [(v1 + v2)/2] t
(b) The autocorrelation function of X(t) is
RXX(t, t + τ) = E[X(t) X(t + τ)] = E[X0² + X0v(2t + τ) + v²t² + v²tτ]
E[X0²] = ∫ x0² f_{X0}(x0) dx0 = (X02³ − X01³)/(3(X02 − X01)) = (X02² + X01X02 + X01²)/3
E[v²] = (v2³ − v1³)/(3(v2 − v1)) = (v2² + v1v2 + v1²)/3
E[X0v] = E[X0] E[v] = [(X01 + X02)/2][(v1 + v2)/2]
RXX(t, t + τ) = (X02² + X01X02 + X01²)/3 + (2t + τ)[(X01 + X02)/2][(v1 + v2)/2] + (t² + tτ)(v2² + v1v2 + v1²)/3
(c) CXX(t, t + τ) = RXX(t, t + τ) − mX(t) mX(t + τ)
(d) Since the mean value depends on t, X(t) is not stationary in any sense.
6.28 Statistically independent zero-mean random processes X(t) and Y(t) have autocorrelation functions RXX(τ) = e^{−|τ|} and RYY(τ) = cos(2πτ) respectively.
(a) Find the autocorrelation function of the sum W1(t) = X(t) + Y(t).
(b) Find the autocorrelation function of the difference W2(t) = X(t) − Y(t).
(c) Find the cross-correlation function of W1(t) and W2(t).
6.29 A random process X(t) has four equally likely sample functions X(s1, t) = cos t, X(s2, t) = −cos t, X(s3, t) = sin t and X(s4, t) = −sin t. Check whether the process is WSS.
Solution Since all sample functions are equally likely, their probability is pX(x) = 1/4. Therefore,
E[X] = Σᵢ₌₁⁴ X(sᵢ, t) pX(x) = (1/4)(cos t − cos t + sin t − sin t) = 0
RXX(t, t + τ) = E[X(t) X(t + τ)] = (1/4) Σᵢ₌₁⁴ X(sᵢ, t) X(sᵢ, t + τ)
= (1/4)[cos t cos(t + τ) + cos t cos(t + τ) + sin t sin(t + τ) + sin t sin(t + τ)]
= (1/2)[cos t cos(t + τ) + sin t sin(t + τ)]
= (1/2) cos τ (∵ cos t cos(t + τ) + sin t sin(t + τ) = cos(t + τ − t))
which is a function of the time difference only. The process X(t) is a WSS process.
Practice Problem
6.12 A random process X(t) is characterized by four equally likely sample functions
X(s1, t) = −1; X(s2, t) = −2; X(s3, t) = 3; X(s4, t) = t
Check whether the process is a WSS process. (Ans: Not a WSS process)
Solved Problems
6.30 Consider a WSS random process X(t) with zero mean and autocorrelation function RXX(τ) = e^{−2|τ|}. This random process modulates the carrier wave cos(ωt + φ), where φ is uniformly distributed in the interval (0, 2π); the resulting process is Y(t) = X(t) cos(ωt + φ). The carrier wave is independent of X(t). Find the mean, variance and autocorrelation of Y(t), and determine whether the process is WSS.
Solution Given: Y(t) = X(t) cos(ωt + φ)
mY(t) = E[Y(t)] = E[X(t) cos(ωt + φ)]
Since X(t) and cos(ωt + φ) are independent,
mY(t) = E[X(t)] E[cos(ωt + φ)]
Given E[X(t)] = 0; also, with f(φ) = 1/2π for 0 ≤ φ ≤ 2π and zero otherwise,
E[cos(ωt + φ)] = (1/2π) ∫_0^{2π} cos(ωt + φ) dφ = 0
Hence mY(t) = 0.
The variance:
σY² = E[Y²(t)] − {E[Y(t)]}² = E[Y²(t)] = E[X²(t)] E[cos²(ωt + φ)]
E[X²(t)] = RXX(0) = 1
E[cos²(ωt + φ)] = E[(1 + cos(2ωt + 2φ))/2] = 1/2 + (1/2)E[cos(2ωt + 2φ)] = 1/2
σY² = 1/2
RYY(t, t + τ) = E[Y(t) Y(t + τ)] = E[X(t) X(t + τ) cos(ωt + φ) cos(ωt + ωτ + φ)]
= E[X(t) X(t + τ)] (1/2) E[cos ωτ + cos(2ωt + ωτ + 2φ)]
= (1/2) RXX(τ) {cos ωτ + E[cos(2ωt + ωτ + 2φ)]} = (1/2) RXX(τ) cos ωτ
RYY(τ) = (1/2) e^{−2|τ|} cos ωτ
Since the mean is constant and the autocorrelation depends only on τ, Y(t) is WSS.
6.31 The process X(t) is WSS and normal with E[X(t)] = 0 and RXX(τ) = 4e^{−2|τ|}. Find E{[X(t + 1) − X(t − 1)]²}.
Solution The random variable is normal with zero mean, and the autocorrelation function is RXX(τ) = 4e^{−2|τ|}, so RXX(0) = E[X²(t)] = 4. Expanding,
E{[X(t + 1) − X(t − 1)]²} = E[X²(t + 1)] + E[X²(t − 1)] − 2E[X(t + 1)X(t − 1)]
= 2RXX(0) − 2RXX(2) = 2(4) − 2(4e^{−4}) = 8(1 − e^{−4})
6.32 Given X(t) = A cos(ω0t + θ) and Y(t) = B cos(ω1t + θ), where A, B, ω0 and ω1 are constants and θ is a random variable uniform on (0, 2π), determine whether X(t) and Y(t) are jointly wide-sense stationary.
Solution E[X(t)] = 0 and, similarly, E[Y(t)] = 0.
The autocorrelation function of the random process X(t) is
RXX(t, t + τ) = E[X(t) X(t + τ)] = E[A cos(ω0t + θ) A cos(ω0(t + τ) + θ)]
= E[(A²/2)(cos(2ω0t + ω0τ + 2θ) + cos ω0τ)]
E[cos(2ω0t + ω0τ + 2θ)] = (1/2π) ∫_0^{2π} cos(2ω0t + ω0τ + 2θ) dθ = 0
⟹ RXX(t, t + τ) = (A²/2) cos ω0τ
Similarly, RYY(t, t + τ) = (B²/2) cos ω1τ, and the cross-correlation is
RXY(t, t + τ) = E[X(t) Y(t + τ)] = E[AB cos(ω0t + θ) cos(ω1t + ω1τ + θ)]
If ω0 ≠ ω1, the cross-correlation is a function of both t and τ; therefore X(t) and Y(t) are not jointly wide-sense stationary.
If ω0 = ω1, RXY(τ) = (AB/2) cos ω1τ, which is a function of τ only. Thus the given X(t) and Y(t) are jointly wide-sense stationary.
6.33 A random process is described by X(t) = A, where A is a continuous random variable uniformly distributed over (0, 1). Classify the process.
Solution Given X(t) = A, where A is uniformly distributed over (0, 1):
f(A) = 1 for 0 ≤ A ≤ 1; 0 otherwise
For a uniform distribution, fX(x) = 1/(b − a) for a ≤ x ≤ b and 0 otherwise, so
mA = (b + a)/2 = (1 + 0)/2 = 0.5 and σA² = (b − a)²/12 = (1 − 0)²/12 = 1/12
The autocorrelation function:
RXX(t, t + τ) = E[X(t) X(t + τ)] = E[A · A] = E[A²]
E[A²] = ∫_{−∞}^{∞} A² f(A) dA = ∫_0^1 A² dA = 1/3
Since the mean is constant and the autocorrelation function is not a function of t, X(t) is a stationary random process.
6.34 A random process is defined by $X(t) = At$, where A is a continuous random variable uniformly distributed in (0, 1). Find (a) $E[X(t)]$, (b) $R_{XX}(t, t+\tau)$, and (c) whether the process is stationary in any sense.
Solution
(a) $E[X(t)] = t\,E[A] = \frac{t}{2}$
(b) $R_{XX}(t, t+\tau) = E[X(t)X(t+\tau)] = t(t+\tau)\,E[A^2]$, where
$E[A^2] = \int_{-\infty}^{\infty}A^2f(A)\,dA = \int_0^1 A^2\,dA = \frac{1}{3}$
$\Rightarrow R_{XX}(t, t+\tau) = \frac{1}{3}t(t+\tau)$
(c) Since the mean and autocorrelation functions are functions of t, the random process X(t) is not a stationary random process.
6.35 Compute the variance of the random process X(t) whose autocorrelation function is given by
$R_{XX}(\tau) = 16 + \frac{2}{1 + 2\tau^2}$
Solution Since the process has no periodic component,
$(m_X)^2 = \lim_{\tau\to\infty}R_{XX}(\tau) = \lim_{\tau\to\infty}\left[16 + \frac{2}{1 + 2\tau^2}\right] = 16 \;\Rightarrow\; m_X = \pm 4$
Also, we know $E[X^2(t)] = R_{XX}(0) = 16 + 2 = 18$
$\therefore \sigma_X^2 = E[X^2(t)] - (m_X)^2 = 18 - 16 = 2$
6.36 A random process X(t) has $m_X(t) = 6$ and $R_{XX}(t, t+\tau) = 36 + 25e^{-|\tau|}$. State whether each of the following statements is true: (a) X(t) is first-order stationary; (b) the average power of X(t) is 61 W; (c) the variance of X(t) is 25.
Solution Given: $m_X(t) = 6$ and $R_{XX}(t, t+\tau) = R_{XX}(\tau) = 36 + 25e^{-|\tau|}$
(a) For a first-order stationary process, the mean value is constant. For the given process X(t), the mean value is 6, a constant. Hence, it is true.
(b) The average power of a random process is
$P = R_{XX}(0) = \left[36 + 25e^{-|\tau|}\right]_{\tau=0} = 61\ \text{W}$
The average power of X(t) is 61 W. Hence, it is true.
(c) From the properties of autocorrelation, we know that if $m_X(t) \ne 0$ and X(t) is ergodic with no periodic components, then
$\lim_{\tau\to\infty}R_{XX}(\tau) = (m_X)^2 = 36$
Also $R_{XX}(0) = E[X^2(t)] = 61$, so
$\sigma_X^2 = R_{XX}(0) - (m_X)^2 = 61 - 36 = 25$
Hence, it is true.
6.37 Given X (t ) = A cos w 0 t + B sin w 0 t , where A and B are random variables and w0 is a constant.
Show that X(t) is WSS if A and B are uncorrelated zero-mean random variables having different density
functions but the same variance s2.
Solution If A and B are uncorrelated with zero mean and variance $\sigma^2$, then $E[A] = E[B] = 0$, $E[A^2] = E[B^2] = \sigma^2$ and $E[AB] = E[A]E[B] = 0$.
The mean $m_X(t) = E[A]\cos\omega_0 t + E[B]\sin\omega_0 t = 0$, a constant. The autocorrelation is
$R_{XX}(t, t+\tau) = E[(A\cos\omega_0 t + B\sin\omega_0 t)(A\cos\omega_0(t+\tau) + B\sin\omega_0(t+\tau))]$
$= E[A^2]\cos\omega_0 t\cos\omega_0(t+\tau) + E[B^2]\sin\omega_0 t\sin\omega_0(t+\tau) = \sigma^2\cos\omega_0\tau$
Since the autocorrelation function depends only on $\tau$, the process X(t) is wide-sense stationary.
6.38 A random process X(t) is defined by $X(t) = A\cos t + B\sin t$, $-\infty < t < \infty$, where A and B are independent random variables, each of which takes the value $-1$ with probability 2/3 and the value 2 with probability 1/3. Show that X(t) is a wide-sense stationary process.
Solution From the given data, the probability mass functions of the random variables A and B are shown in Fig. 6.17.
Fig. 6.17
$E[A] = E[B] = (-1)\left(\frac{2}{3}\right) + 2\left(\frac{1}{3}\right) = 0$
$E[A^2] = E[B^2] = (-1)^2\left(\frac{2}{3}\right) + (2)^2\left(\frac{1}{3}\right) = \frac{2}{3} + \frac{4}{3} = 2$
Since A and B are independent, $E[AB] = E[A]E[B] = 0$
$R_{XX}(t, t+\tau) = E[X(t)X(t+\tau)] = E[(A\cos t + B\sin t)(A\cos(t+\tau) + B\sin(t+\tau))]$
$= E[A^2]\cos t\cos(t+\tau) + E[AB][\cos(t+\tau)\sin t + \cos t\sin(t+\tau)] + E[B^2]\sin t\sin(t+\tau)$
$= 2[\cos t\cos(t+\tau) + \sin t\sin(t+\tau)] = 2\cos(t - (t+\tau)) = 2\cos\tau$
Hence the process is wide-sense stationary.
Practice Problem
6.13 Repeat Solved Problem 6.25 with $A \sim U(1, 2)$, $B \sim U(0, 1)$ and $\theta \sim U\!\left(0, \frac{\pi}{2}\right)$.
Solved Problem
6.39 Consider a random process $X(t) = A\sin\omega t + B\cos\omega t$, $-\infty < t < \infty$, where $\omega$ is a constant and A and B are random variables.
(a) Find the condition for X(t) to be stationary.
(b) Show that X(t) is WSS if and only if A and B are uncorrelated with equal variance.
6.40 A stationary random process X(t) has autocorrelation $R_{XX}(\tau) = 16 + 4\cos(2\tau) + 5e^{-4|\tau|}$. Find (a) the average power of its dc component, ac components, non-dc components and non-periodic components, (b) the average power of X(t), and (c) the variance of X(t).
Solution Given: $R_{XX}(\tau) = 16 + 4\cos(2\tau) + 5e^{-4|\tau|}$
The autocorrelation is composed of periodic and non-periodic components. Let $R_{XX}(\tau) = R_{pc}(\tau) + R_{nc}(\tau)$; then $R_{pc}(\tau) = 4\cos 2\tau$ and $R_{nc}(\tau) = 16 + 5e^{-4|\tau|}$.
The average power of the periodic component is $R_{pc}(0) = 4$.
The average power of the non-periodic component is $R_{nc}(0) = 16 + 5 = 21$.
For the non-periodic component,
$\lim_{\tau\to\infty}R_{nc}(\tau) = m_X^2 \Rightarrow m_X = \pm\sqrt{\lim_{\tau\to\infty}\left(16 + 5e^{-4|\tau|}\right)} = \pm 4$
so the dc power is $m_X^2 = 16$.
The average power is
$E[X^2] = R_{XX}(0) = 16 + 4 + 5 = 25$
The variance is
$\sigma_X^2 = E[X^2] - (m_X)^2 = 25 - 16 = 9$
which is the average power of the non-dc components.
6.41 For the random processes $X(t) = A_1\cos(\omega_1 t + \theta)$ and $Y(t) = A_2\cos(\omega_2 t + \phi)$, find the cross-correlation function if (a) $\theta \ne \phi$, and (b) $\theta = \phi$. The values $A_1$, $A_2$, $\omega_1$ and $\omega_2$ are constants. The random variables $\theta$ and $\phi$ are independent and uniformly distributed over $(-\pi, \pi)$.
Solution
$R_{XY}(t, t+\tau) = E[A_1\cos(\omega_1 t + \theta)\,A_2\cos(\omega_2(t+\tau) + \phi)]$
(a) If $\theta \ne \phi$, the phases are independent and the expectation factors. With $f(\theta) = \frac{1}{2\pi}$ for $-\pi \le \theta \le \pi$,
$E[\cos(\omega_1 t + \theta)] = \frac{1}{2\pi}\int_{-\pi}^{\pi}\cos(\omega_1 t + \theta)\,d\theta = \frac{1}{2\pi}\left[\sin(\omega_1 t + \theta)\right]_{-\pi}^{\pi} = 0$
Since $E[\cos(\omega_1 t + \theta)] = 0$, $R_{XY} = 0$.
(b) If $\theta = \phi$, then
$R_{XY} = E[A_1\cos(\omega_1 t + \theta)\,A_2\cos(\omega_2(t+\tau) + \theta)]$
$= \frac{A_1A_2}{2}E\{\cos[(\omega_1 + \omega_2)t + \omega_2\tau + 2\theta] + \cos[(\omega_2 - \omega_1)t + \omega_2\tau]\}$
Since $\omega_1$, $\omega_2$ are constants and $E\{\cos[(\omega_1 + \omega_2)t + \omega_2\tau + 2\theta]\} = 0$,
$R_{XY} = \frac{A_1A_2}{2}\cos[(\omega_2 - \omega_1)t + \omega_2\tau]$
6.42 Show that if a random process X(t) is strict-sense stationary then it is also wide-sense stationary.
Solution Given: X(t) is strict-sense stationary. Its first-order and second-order density functions are invariant to a time shift:
$f_X(x; t) = f_X(x; t+\Delta)$ and $f_X(x_1, x_2; t_1, t_2) = f_X(x_1, x_2; t_1+\Delta, t_2+\Delta)$
From the first relation, the mean value satisfies $m_X(t) = m_X(t+\Delta)$ for every $\Delta$, i.e., the mean is constant. From the second, the autocorrelation function satisfies
$R_{XX}(t_1, t_2) = R_{XX}(t_1 + \Delta, t_2 + \Delta)$
That is, the autocorrelation function depends on the time points $t_1$ and $t_2$ only through the difference $t_1 - t_2$. Thus, X(t) is WSS.
6.43 Consider two random processes $X(t) = A\cos\omega t + B\sin\omega t$ and $Y(t) = B\cos\omega t - A\sin\omega t$, where A and B are uncorrelated, zero-mean random variables with the same variance and $\omega$ is a constant. Show that X(t) and Y(t) are jointly stationary.
Solution Since A and B are zero-mean and uncorrelated, $E[AB] = 0$ and
$E[A^2] = \sigma_A^2 + \{E[A]\}^2 = \sigma_A^2, \qquad E[B^2] = \sigma_B^2 + \{E[B]\}^2 = \sigma_B^2, \qquad E[A^2] = E[B^2]$
$\Rightarrow R_{XY}(t, t+\tau) = E[A^2]\{\cos(\omega t + \omega\tau)\sin\omega t - \cos\omega t\sin(\omega t + \omega\tau)\} = -E[A^2]\sin\omega\tau$
Each process is individually WSS (see Solved Problem 6.37), and since $R_{XY}(t, t+\tau)$ is independent of time t, X(t) and Y(t) are jointly stationary processes.
6.44 Let X(t) be a random process with mean 5 and autocorrelation $R_{XX}(t_1, t_2) = 25 + 4e^{-0.1|t_1 - t_2|}$. Determine the mean, variance and covariance of the random variables $Z = X(5)$ and $W = X(9)$.
Solution The mean is constant, so $E[Z] = E[W] = 5$.
$E[Z^2] = E[W^2] = E[X^2] = R_{XX}(0) = 25 + 4 = 29 \Rightarrow \sigma_Z^2 = \sigma_W^2 = 29 - 25 = 4$
$\mathrm{Cov}(Z, W) = R_{XX}(9, 5) - E[X(5)]E[X(9)] = 25 + 4e^{-0.1(9-5)} - (5)(5) = 4e^{-0.4}$
6.45 If $X(t) = \sin(\omega t + y)$, where y is a uniformly distributed random variable in the interval $(0, 2\pi)$, prove that $\mathrm{Cov}(t_1, t_2) = \frac{1}{2}\cos\omega(t_1 - t_2)$.
Solution Given: $X(t) = \sin(\omega t + y)$, with $f_Y(y) = \frac{1}{2\pi}$ for $0 \le y \le 2\pi$ and 0 otherwise.
$E[X(t)] = E[\sin(\omega t + y)] = \frac{1}{2\pi}\int_0^{2\pi}\sin(\omega t + y)\,dy = 0$
The autocorrelation:
$R_{XX}(t_1, t_2) = E[\sin(\omega t_1 + y)\sin(\omega t_2 + y)] = \frac{1}{2}E\left[\cos\omega(t_1 - t_2) - \cos(\omega t_1 + \omega t_2 + 2y)\right]$
$= \frac{1}{2}\cos\omega(t_1 - t_2) - \frac{1}{2}E\left[\cos(\omega t_1 + \omega t_2 + 2y)\right]$
$E\left[\cos(\omega t_1 + \omega t_2 + 2y)\right] = \frac{1}{2\pi}\int_0^{2\pi}\cos(\omega t_1 + \omega t_2 + 2y)\,dy = 0$
$\Rightarrow R_{XX}(t_1, t_2) = \frac{1}{2}\cos\omega(t_1 - t_2)$
Since the mean is zero, $\mathrm{Cov}(t_1, t_2) = R_{XX}(t_1, t_2) = \frac{1}{2}\cos\omega(t_1 - t_2)$.
6.46 A complex random process Z(t) = X(t) + jY(t) is defined by jointly stationary real processes X(t)
and Y(t). Show that
E{| Z (t )|2 } = RXX (0) + RYY (0) .
Solution
$E\{|Z(t)|^2\} = E[Z^*(t)Z(t)] = E\{[X(t) - jY(t)][X(t) + jY(t)]\}$
$= E[X^2(t) + Y^2(t)] = E[X^2(t)] + E[Y^2(t)] = R_{XX}(0) + R_{YY}(0)$
6.47 Two complex random processes $Z_1(t) = X_1(t) + jY_1(t)$ and $Z_2(t) = X_2(t) + jY_2(t)$ are defined by the real processes $X_1(t)$, $X_2(t)$, $Y_1(t)$ and $Y_2(t)$. Find the expression for the cross-correlation function of $Z_1(t)$ and $Z_2(t)$ if
(a) all the real processes are correlated, and
(b) they are uncorrelated.
6.48 Consider two zero-mean jointly wide-sense stationary random processes X(t) and Y(t) with $\sigma_X^2 = 4$ and $\sigma_Y^2 = 12$. Explain why each of the following functions cannot apply to the processes if they have no periodic components.
(a) $R_{XX}(\tau) = e^{-3\tau}u(\tau)$  (b) $R_{XX}(\tau) = -\sin(3\tau)e^{-\tau}$
(c) $R_{XY}(\tau) = 9(2 + 2\tau^2)^{-1}$  (d) $R_{YY}(\tau) = 3\left[\dfrac{\sin(2\tau)}{2\tau}\right]^2$
(e) $R_{YY}(\tau) = 5 + 3\left[\dfrac{\sin(5\tau)}{5\tau}\right]$
6.49 Consider a random process Y (t ) = X (t ) cos(w 0 t + q ) where X(t) is a wide-sense stationary random
process, q is a random variable independent of X(t) and is distributed uniformly in (–p, p) and w0 is a
constant. Prove that Y(t) is wide-sense stationary.
Solution With $\theta \sim U(-\pi, \pi)$, the mean of Y(t) is
$m_Y(t) = m_X\,E[\cos(\omega_0 t + \theta)] = \frac{m_X}{2\pi}\left[\sin(\omega_0 t + \theta)\right]_{-\pi}^{\pi} = \frac{m_X}{2\pi}\left[\sin(\pi + \omega_0 t) - \sin(-\pi + \omega_0 t)\right] = \frac{m_X}{2\pi}\left[-\sin\omega_0 t + \sin\omega_0 t\right] = 0$
$R_{YY}(t, t+\tau) = E[Y(t)Y(t+\tau)] = E[X(t)X(t+\tau)]\,E[\cos(\omega_0 t + \theta)\cos(\omega_0(t+\tau) + \theta)]$
$= \frac{R_{XX}(\tau)}{2}E\left[\cos(\omega_0\tau) + \cos(2\omega_0 t + \omega_0\tau + 2\theta)\right]$
$= \frac{R_{XX}(\tau)}{2}\left[\frac{1}{2\pi}\int_{-\pi}^{\pi}\cos(\omega_0\tau)\,d\theta + \frac{1}{2\pi}\int_{-\pi}^{\pi}\cos(2\omega_0 t + \omega_0\tau + 2\theta)\,d\theta\right]$
$= \frac{R_{XX}(\tau)}{2}\left[\cos\omega_0\tau + \underbrace{\frac{1}{2\pi}\cdot\frac{\sin(2\omega_0 t + \omega_0\tau + 2\theta)}{2}\bigg|_{-\pi}^{\pi}}_{0}\right] = \frac{1}{2}R_{XX}(\tau)\cos\omega_0\tau$
Since $m_Y$ is constant and $R_{YY}$ depends only on $\tau$, Y(t) is wide-sense stationary.
6.50 A wide-sense stationary random process Y(t) has a power of $E[Y^2(t)] = 4$. Give at least one reason why each of the following expressions cannot be its autocorrelation function.
(a) $R_{YY}(t, t+\tau) = -4e^{-|\tau|}$
(b) $R_{YY}(t, t+\tau) = \dfrac{4\tau}{1 + 2\tau^2 + 2\tau^4}$
(c) $R_{YY}(t, t+\tau) = 6e^{-\tau^2 - |\tau|}$
(d) $R_{YY}(t, t+\tau) = 4\,\dfrac{\sin[4(\tau - 2)]}{4(\tau - 2)}$
(e) $R_{YY}(t, t+\tau) = \dfrac{\cos^2(5\tau)}{2 + \cos 4t}$
Solution
(a) $R_{YY}(0) = -4$, but the ACF must take its maximum positive value at the origin and equal the power, $E[Y^2(t)] = 4$. Hence it is not a valid ACF.
(b) $R_{YY}(0) = 0 \ne 4$, and the expression is an odd function of $\tau$, whereas an ACF must be even. Hence it is not a valid ACF.
(c) $R_{YY}(0) = 6$, which is not equal to 4 as given in the problem. Therefore, the above expression is not a valid ACF.
(d) The above function attains its maximum value at $\tau = 2$ and not at $\tau = 0$. Hence, it is not a valid ACF.
(e) An ACF cannot be a function of t for a wide-sense stationary random process. Hence, the above expression is not a valid ACF.
6.51 Consider a random process $X(t) = A\sin\omega t + B\cos\omega t$, $-\infty < t < \infty$, where $\omega$ is a constant and A and B are random variables.
(a) Find the condition for X(t) to be stationary.
(b) Show that X(t) is WSS if and only if A and B are uncorrelated with equal variance.
Solution
(a) The mean is $m_X(t) = E[A]\sin\omega t + E[B]\cos\omega t$. If E[A] and E[B] are non-zero constants, then $m_X(t)$ depends on time t. Therefore, for $m_X(t)$ to be independent of t, we require $E[A] = E[B] = 0$.
(b) The autocorrelation function:
$R_{XX}(t, t+\tau) = E[X(t)X(t+\tau)] = E\left[(A\sin\omega t + B\cos\omega t)\big(A\sin\omega(t+\tau) + B\cos\omega(t+\tau)\big)\right]$
$= E[A^2]\sin\omega t\sin\omega(t+\tau) + E[B^2]\cos\omega t\cos\omega(t+\tau) + E[AB]\left[\sin\omega t\cos\omega(t+\tau) + \cos\omega t\sin\omega(t+\tau)\right]$
If $E[A^2] = E[B^2] = \sigma^2$, then
$R_{XX}(t, t+\tau) = \sigma^2\cos\omega\tau + E[AB]\sin(2\omega t + \omega\tau)$
The above autocorrelation function will be a function of $\tau$ only if $E[AB] = 0$, that is, if A and B are uncorrelated. Therefore, X(t) is WSS if and only if A and B are uncorrelated with equal variance.
6.52 Consider a random process $X(t) = A\cos\pi t$, where A is a Gaussian random variable with zero mean and variance $\sigma_A^2$. Is X(t) stationary in any sense?
Solution $E[X(t)] = E[A]\cos\pi t = 0$, and
$E[X^2(t)] = E[A^2]\cos^2\pi t = \sigma_A^2\left[\frac{1 + \cos 2\pi t}{2}\right]$
Since $E[X^2(t)]$ depends on time, X(t) is not wide-sense stationary (and hence not stationary in any stricter sense).
6.53 Consider two random processes $X_1(t) = p_1(t + \epsilon)$ and $X_2(t) = p_2(t + \epsilon)$, where $p_1(t)$ and $p_2(t)$ are both periodic waveforms with period T, and $\epsilon$ is a uniform random variable on the interval (0, T). Find $R_{X_1X_2}(t, t+\tau)$.
Solution Given: $X_1(t) = p_1(t+\epsilon)$, $X_2(t) = p_2(t+\epsilon)$ and $\epsilon \sim U(0, T)$, i.e., $f_\epsilon(\epsilon) = \frac{1}{T}$ for $0 \le \epsilon \le T$ and 0 otherwise.
$R_{X_1X_2}(t, t+\tau) = E[X_1(t)X_2(t+\tau)] = E[p_1(t+\epsilon)p_2(t+\epsilon+\tau)]$
$= \int_{-\infty}^{\infty}p_1(t+\epsilon)p_2(t+\epsilon+\tau)f_\epsilon(\epsilon)\,d\epsilon = \frac{1}{T}\int_0^T p_1(t+\epsilon)p_2(t+\epsilon+\tau)\,d\epsilon$
Let $t + \epsilon = \nu \Rightarrow d\nu = d\epsilon$; at $\epsilon = 0$, $\nu = t$, and at $\epsilon = T$, $\nu = t + T$. Then
$R_{X_1X_2}(t, t+\tau) = \frac{1}{T}\int_t^{t+T}p_1(\nu)p_2(\nu+\tau)\,d\nu$
Since both $p_1(t)$ and $p_2(t)$ are periodic with period T, the integral over any interval of length T is the same:
$R_{X_1X_2}(t, t+\tau) = \frac{1}{T}\int_0^T p_1(\nu)p_2(\nu+\tau)\,d\nu = R_{X_1X_2}(\tau)$
0
6.54 Let X(t) be the sum of a deterministic signal x(t) and a wide-sense stationary noise process N(t).
Find the mean value, autocorrelation and autocovariance of X(t). Discuss the stationarity of X(t).
6.55 Find the autocorrelation function of a random process with periodic sample function
$p(t) = A\sin^2\left(\frac{2\pi t}{T}\right)$
where A and T > 0 are constants.
Solution The autocorrelation over one period (as in Solved Problem 6.53) is
$R(\tau) = \frac{1}{T}\int_0^T A\sin^2\left(\frac{2\pi\nu}{T}\right)A\sin^2\left(\frac{2\pi(\nu+\tau)}{T}\right)d\nu$
$= \frac{A^2}{T}\int_0^T\left\{\frac{1 - \cos\left(\frac{4\pi\nu}{T}\right)}{2}\right\}\left\{\frac{1 - \cos\left(\frac{4\pi(\nu+\tau)}{T}\right)}{2}\right\}d\nu$
$= \frac{A^2}{4T}\int_0^T\left[1 - \cos\frac{4\pi\nu}{T} - \cos\frac{4\pi(\nu+\tau)}{T} + \cos\frac{4\pi\nu}{T}\cos\frac{4\pi(\nu+\tau)}{T}\right]d\nu$
The two single-cosine terms integrate to zero over a full period, and the product term averages to $\frac{1}{2}\cos\frac{4\pi\tau}{T}$, so
$R(\tau) = \frac{A^2}{4} + \frac{A^2}{8}\cos\left(\frac{4\pi\tau}{T}\right)$
6.56 A discrete-time random process is defined by $X_n = s^n$, $n \ge 0$, where s is a uniform random variable in (0, 1). Find the CDF of $X_n$. Sample functions of the process are shown in Fig. 6.18.
Fig. 6.18
Solution The CDF of $X_n$ is
$F_{X_n}(x) = P(X_n \le x) = P(s^n \le x) = P(s \le x^{1/n})$
Since s is a uniform random variable between 0 and 1, we get
$P(s \le x^{1/n}) = x^{1/n}$ for $0 \le x \le 1$
(for $x \sim U(0, 1)$, $F_X(x) = \int_0^x dx = x$).
6.57 For the Solved Problem 6.56, find mean and autocovariance of Xn. Check whether the process is
mean ergodic.
Solution
$E[X_n] = E[s^n] = \int_0^1 s^n\,ds = \left[\frac{s^{n+1}}{n+1}\right]_0^1 = \frac{1}{n+1}$
$E[X_nX_{n+k}] = E[s^ns^{n+k}] = E[s^{2n+k}] = \frac{1}{2n+k+1}$
$C_X(n, n+k) = E[X_nX_{n+k}] - E[X_n]E[X_{n+k}] = \frac{1}{2n+k+1} - \frac{1}{n+1}\cdot\frac{1}{n+k+1}$
Since CX(n, n + k) is a function of n, Xn is not WSS.
Therefore, Xn is not mean ergodic.
6.58 (a) When are two orthogonal random processes uncorrelated? (b) When are two uncorrelated random processes orthogonal?
Solution
(a) Consider two random processes X(t) and Y(t). If X(t) and Y(t) are orthogonal random processes, then
$E[X(t_1)Y(t_2)] = 0$
If X(t) and Y(t) are uncorrelated, then
$\mathrm{Cov}[X(t_1), Y(t_2)] = 0$
We have
$\mathrm{Cov}[X(t_1), Y(t_2)] = E[\{X(t_1) - E[X(t_1)]\}\{Y(t_2) - E[Y(t_2)]\}] = E[X(t_1)Y(t_2)] - E[X(t_1)]E[Y(t_2)]$
If either $E[X(t_1)]$ or $E[Y(t_2)]$ (or both) is zero, then
$\mathrm{Cov}[X(t_1), Y(t_2)] = E[X(t_1)Y(t_2)] = 0$
and hence X(t) and Y(t) are uncorrelated. Therefore, two orthogonal random processes are uncorrelated when at least one of the random processes has zero mean.
(b) For uncorrelated processes, $E[X(t_1)Y(t_2)] = E[X(t_1)]E[Y(t_2)]$. For orthogonal processes, $E[X(t_1)Y(t_2)] = 0$. That is, uncorrelated X(t) and Y(t) will be orthogonal if X(t) and/or Y(t) has zero mean.
6.59 Consider a random process $Y_n = X_n + C(n)$, where $X_n$ is a zero-mean, unit-variance random process and C(n) is a deterministic function. (a) Find the mean and variance of $Y_n$. (b) Find the CDF of $Y_n$. (c) Find the mean and autocovariance function.
Solution
$Y_n = X_n + C(n)$
$E[Y_n] = E[X_n + C(n)] = E[X_n] + C(n) = C(n)$
$\mathrm{Var}(Y_n) = \mathrm{Var}[X_n + C(n)] = \mathrm{Var}(X_n) = 1$
since adding the deterministic term C(n) does not change the variance. (The mean-square value, however, is $E[Y_n^2] = C^2(n) + 1$.)
The CDF of $Y_n$ is
$F_{Y_n}(x) = P(Y_n \le x) = P[X_n + C(n) \le x] = F_{X_n}(x - C(n))$
Practice Problems
6.14 Consider the random process $X(t) = \cos(t + \phi)$, where $\phi$ is a random variable with density function $f_\phi(\phi) = \frac{1}{\pi}$, $-\frac{\pi}{2} < \phi < \frac{\pi}{2}$. Check whether the process is wide-sense stationary.
6.15 Examine whether the random process $X(t) = A\cos(\omega t + \theta)$ is wide-sense stationary if A and $\omega$ are constants and $\theta \sim U(0, 2\pi)$.
6.16 If $X(t) = \sin(\omega t + Y)$, where Y is uniformly distributed in $(0, 2\pi)$, show that X(t) is a wide-sense stationary (WSS) process.
6.17 Show that the random process $X(t) = \cos(t + \phi)$, where $\phi \sim U(0, 2\pi)$, is
(a) first-order stationary
(b) stationary in the wide sense
(c) ergodic in mean.
6.18 Find the mean, mean-square value and variance of the process X(t) for which
$R_{XX}(\tau) = 50e^{-10|\tau|} + 25\cos(5\tau) + 10$ (Ans. ±3.16, 85, 75)
6.19 The random process X(t) is stationary with $E[X(t)] = 1$ and $R_{XX}(\tau) = 1 + e^{-2|\tau|}$. Find the mean and variance of
$S = \int_0^1 X(t)\,dt$ $\left(\text{Ans. } 1,\ \frac{1}{2}\left(1 + e^{-2}\right)\right)$
Solved Problem
6.60 Let X(t) be a stationary random process whose derivative $\dot X(t)$ exists. Show that (a) $E[\dot X(t)] = 0$, (b) $R_{X\dot X}(\tau) = \frac{dR_{XX}(\tau)}{d\tau}$, and (c) $R_{\dot X\dot X}(\tau) = -\frac{d^2R_{XX}(\tau)}{d\tau^2}$.
Solution Let us define $\dot X(t) = \lim_{\epsilon\to 0}\frac{X(t+\epsilon) - X(t)}{\epsilon}$.
(a) $E[\dot X(t)] = E\left[\lim_{\epsilon\to 0}\frac{X(t+\epsilon) - X(t)}{\epsilon}\right]$
Assuming the order of the limit and expectation operations can be interchanged, we get
$E[\dot X(t)] = \lim_{\epsilon\to 0}\left\{\frac{E[X(t+\epsilon)] - E[X(t)]}{\epsilon}\right\}$
Since X(t) is stationary, $E[X(t+\epsilon)] = E[X(t)] = m_X$. Hence
$E[\dot X(t)] = \lim_{\epsilon\to 0}\frac{m_X - m_X}{\epsilon} = 0$
(b) $R_{X\dot X}(\tau) = E[X(t)\dot X(t+\tau)] = E\left[X(t)\lim_{\epsilon\to 0}\frac{X(t+\tau+\epsilon) - X(t+\tau)}{\epsilon}\right]$
$= \lim_{\epsilon\to 0}\frac{E[X(t)X(t+\tau+\epsilon)] - E[X(t)X(t+\tau)]}{\epsilon} = \lim_{\epsilon\to 0}\frac{R_{XX}(\tau+\epsilon) - R_{XX}(\tau)}{\epsilon} = \frac{dR_{XX}(\tau)}{d\tau}$
(c) $R_{\dot XX}(\tau) = E[\dot X(t)X(t+\tau)] = E\left[\lim_{\epsilon\to 0}\left(\frac{X(t+\epsilon) - X(t)}{\epsilon}\right)X(t+\tau)\right]$
$= \lim_{\epsilon\to 0}\frac{E[X(t+\epsilon)X(t+\tau)] - E[X(t)X(t+\tau)]}{\epsilon} = \lim_{\epsilon\to 0}\frac{R_{XX}(\tau-\epsilon) - R_{XX}(\tau)}{\epsilon} = -\frac{dR_{XX}(\tau)}{d\tau}$
Applying the result of part (b) to the pair $\dot X(t)$, $\dot X(t)$,
$R_{\dot X\dot X}(\tau) = \frac{d}{d\tau}R_{\dot XX}(\tau) = -\frac{d}{d\tau}\left[\frac{dR_{XX}(\tau)}{d\tau}\right] = -\frac{d^2R_{XX}(\tau)}{d\tau^2}$
In other words,
$P\left[|X(t+\Delta t) - X(t)| \ge 2\right] = O(\Delta t)$ (6.94)
(e) The probability that no event occurs in the interval $\Delta t$ is
$p_0(\Delta t) = P\left[X(t+\Delta t) - X(t) = 0\right] = 1 - \lambda\Delta t + O(\Delta t)$ (6.95)
Consider the expression
$\sum_{k=0}^{\infty}p_k(\Delta t) = p_0(\Delta t) + p_1(\Delta t) + p_2(\Delta t) + \cdots = 1$ (6.96)
$p_0(t+\Delta t) = P\left[X(t) = 0 \text{ and } X(t+\Delta t) - X(t) = 0\right]$
Since the increments are independent,
$p_0(t+\Delta t) = p_0(t)\,p_0(\Delta t) = p_0(t)[1 - \lambda\Delta t]$ (6.98)
$p_0(t+\Delta t) - p_0(t) = -\lambda p_0(t)\Delta t$ (6.99)
from which we can write
$\frac{p_0(t+\Delta t) - p_0(t)}{\Delta t} = -\lambda p_0(t)$ (6.100)
Letting $\Delta t \to 0$, $\frac{p_0'(t)}{p_0(t)} = -\lambda$
Integrating on both sides,
$\ln p_0(t) = -\lambda t + c$
At $t = 0$, $p_0(0) = 1 \Rightarrow c = 0$. Therefore
$p_0(t) = e^{-\lambda t}$ (6.101)
$p_k(t)$ for $k \ge 1$ can be obtained by considering
$p_k(t+\Delta t) = P[X(t+\Delta t) = k]$ (6.102)
= P[k occurrences in the interval (0, t) and zero occurrences in the interval (t, t + Δt)]
+ P[k - 1 occurrences in the interval (0, t) and one occurrence in the interval (t, t + Δt)]
$= P[X(t) = k,\ X(\Delta t) = 0] + P[X(t) = k-1,\ X(\Delta t) = 1]$ (6.103)
Using assumption 1 (independent increments), we can write
$p_k(t+\Delta t) = P[X(t) = k]\,P[X(\Delta t) = 0] + P[X(t) = k-1]\,P[X(\Delta t) = 1]$
Substituting $P[X(\Delta t) = 0] = 1 - \lambda\Delta t$ and $P[X(\Delta t) = 1] = \lambda\Delta t$ and letting $\Delta t \to 0$ gives
$p_k'(t) + \lambda p_k(t) = \lambda p_{k-1}(t)$
For $k = 1$, $p_1'(t) + \lambda p_1(t) = \lambda p_0(t) = \lambda e^{-\lambda t}$, i.e.,
$e^{\lambda t}p_1'(t) + \lambda e^{\lambda t}p_1(t) = \lambda \;\Rightarrow\; \frac{d}{dt}\left[e^{\lambda t}p_1(t)\right] = \lambda$
Integrating on both sides, we get
$e^{\lambda t}p_1(t) = \lambda t + c_1$
At $t = 0$, $p_1(0) = 0 \Rightarrow c_1 = 0$. Therefore,
$p_1(t) = \lambda te^{-\lambda t}$ (6.108)
For $k = 2$,
$p_2'(t) + \lambda p_2(t) = \lambda p_1(t) = \lambda^2te^{-\lambda t}$
$e^{\lambda t}p_2'(t) + \lambda e^{\lambda t}p_2(t) = \lambda^2t \;\Rightarrow\; \frac{d}{dt}\left[e^{\lambda t}p_2(t)\right] = \lambda^2t$
Integrating on both sides, we get
$e^{\lambda t}p_2(t) = \frac{\lambda^2t^2}{2} + c_2$
At $t = 0$, $p_2(0) = 0 \Rightarrow c_2 = 0$
$\Rightarrow p_2(t) = \frac{\lambda^2t^2}{2}e^{-\lambda t} = \frac{(\lambda t)^2}{2!}e^{-\lambda t}$ (6.109)
Similarly, we can prove that
$p_k(t) = \frac{(\lambda t)^k}{k!}e^{-\lambda t}; \quad k = 0, 1, 2, \ldots$ (6.110)
The probability density of the number of occurrences is
$f_X(x) = \sum_{k=0}^{\infty}\frac{(\lambda t)^ke^{-\lambda t}}{k!}\delta(x - k)$ (6.111)
The mean of the process is
$E[X(t)] = \sum_{k=0}^{\infty}k\,\frac{(\lambda t)^ke^{-\lambda t}}{k!} = \lambda te^{-\lambda t}\sum_{k=1}^{\infty}\frac{(\lambda t)^{k-1}}{(k-1)!} = \lambda te^{-\lambda t}\left[1 + \frac{\lambda t}{1!} + \frac{(\lambda t)^2}{2!} + \cdots\right]$
$= \lambda te^{-\lambda t}\left[e^{\lambda t}\right] = \lambda t$
$\Rightarrow E[X(t)] = \lambda t$ (6.116)
The second moment is
$E[X^2(t)] = \sum_{k=1}^{\infty}\left[k(k-1) + k\right]\frac{e^{-\lambda t}(\lambda t)^k}{k!}$
$= \sum_{k=1}^{\infty}\frac{k(k-1)e^{-\lambda t}(\lambda t)^k}{k!} + \sum_{k=1}^{\infty}\frac{ke^{-\lambda t}(\lambda t)^k}{k!}$
$= e^{-\lambda t}\sum_{k=2}^{\infty}\frac{(\lambda t)^k}{(k-2)!} + \lambda t = e^{-\lambda t}(\lambda t)^2\sum_{k=2}^{\infty}\frac{(\lambda t)^{k-2}}{(k-2)!} + \lambda t$
$= e^{-\lambda t}(\lambda t)^2e^{\lambda t} + \lambda t = (\lambda t)[\lambda t + 1]$
$\mathrm{Var}[X(t)] = E[X^2(t)] - \{E[X(t)]\}^2 = \lambda t[\lambda t + 1] - (\lambda t)^2 = \lambda t$ (6.117)
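Equations (6.116) and (6.117), mean and variance both equal to $\lambda t$, are easy to confirm by simulation. The sketch below is a minimal numerical check; the rate, horizon and trial count are illustrative values of our own choosing.

import numpy as np

rng = np.random.default_rng(1)
lam, t, n_trials = 3.0, 2.0, 200000

counts = rng.poisson(lam * t, size=n_trials)   # X(t) ~ Poisson(lam*t)
print(counts.mean(), counts.var())             # both should be close to lam*t = 6.0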
The autocorrelation function, for $t_1 < t_2$, is
$R_{XX}(t_1, t_2) = E[X(t_1)X(t_2)] = E\left\{X(t_1)\left[X(t_1) + \big(X(t_2) - X(t_1)\big)\right]\right\}$
$= E[X^2(t_1)] + E\left\{X(t_1)\left[X(t_2) - X(t_1)\right]\right\}$
Using assumption 1 (independent increments) and assuming that $t_1 < t_2$, we get
$= E[X^2(t_1)] + E[X(t_1)]\left\{E[X(t_2)] - E[X(t_1)]\right\}$
$= (\lambda t_1)^2 + \lambda t_1 + \lambda t_1\left[\lambda t_2 - \lambda t_1\right] = \lambda^2t_1t_2 + \lambda t_1$ (6.118)
The autocovariance function is then $C_{XX}(t_1, t_2) = R_{XX}(t_1, t_2) - \lambda^2t_1t_2 = \lambda t_1$ for $t_1 < t_2$.
If $t_2 < t_1$, then $X(t_2)$ represents the number of events in the interval $(0, t_2)$ and $X(t_1)$ the number of events in $(0, t_1)$, so $X(t_1) - X(t_2)$ is the number of events in the interval $(t_2, t_1)$. The autocorrelation function is
$R_{XX}(t_1, t_2) = E\left\{X(t_2)\left[X(t_2) + \big(X(t_1) - X(t_2)\big)\right]\right\} = E[X^2(t_2)] + E[X(t_2)]\,E[X(t_1) - X(t_2)]$
$= (\lambda t_2)^2 + \lambda t_2 + \lambda t_2\left[\lambda t_1 - \lambda t_2\right] = \lambda^2t_1t_2 + \lambda t_2$ (6.120)
Hence $C_{XX}(t_1, t_2) = \lambda\min(t_1, t_2)$, and the correlation coefficient between $X(t_1)$ and $X(t_2)$ is, for $t_1 < t_2$,
$\rho = \frac{\lambda t_1}{\sqrt{\lambda t_1\,\lambda t_2}} = \sqrt{\frac{t_1}{t_2}}$ (6.124)
and for $t_1 > t_2$,
$\rho = \frac{\lambda t_2}{\sqrt{\lambda t_1\,\lambda t_2}} = \sqrt{\frac{t_2}{t_1}}$ (6.125)
Practice Problems
6.20 Show that the autocorrelation of a Poisson process with rate $\lambda$ satisfies $R_{XX}(t_1, t_2) = \lambda t_2(1 + \lambda t_1)$ for $t_1 > t_2$.
6.21 Determine the autocovariance of the Poisson process.
6.22 Prove that the Poisson process is not a stationary process.
1. Sum of two independent Poisson processes is a Poisson process
Let $X(t) = X_1(t) + X_2(t)$, where $X_1(t)$ and $X_2(t)$ are independent Poisson processes with rates $\lambda_1$ and $\lambda_2$. Then $m_X(t) = (\lambda_1 + \lambda_2)t$ and
$E[X^2(t)] = E\left\{\left[X_1(t) + X_2(t)\right]^2\right\} = E\left[X_1^2(t) + X_2^2(t) + 2X_1(t)X_2(t)\right]$
$= (\lambda_1t + \lambda_2t)^2 + (\lambda_1 + \lambda_2)t = (\lambda_1 + \lambda_2)^2t^2 + (\lambda_1 + \lambda_2)t$ (6.127)
so that $\mathrm{Var}[X(t)] = (\lambda_1 + \lambda_2)t$. From the expressions for $m_X(t)$ and $\mathrm{Var}[X(t)]$ we can conclude that X(t) is a Poisson process with rate $\lambda_1 + \lambda_2$.
2. Difference of two independent Poisson processes is not a Poisson process
Let $X(t) = X_1(t) - X_2(t)$. We know $E[X_1(t)] = \lambda_1t$ and $E[X_2(t)] = \lambda_2t$, so
$E[X(t)] = (\lambda_1 - \lambda_2)t$
$E[X^2(t)] = E\left\{\left[X_1(t) - X_2(t)\right]^2\right\} = E\left[X_1^2(t) + X_2^2(t) - 2X_1(t)X_2(t)\right] = (\lambda_1 - \lambda_2)^2t^2 + \underline{(\lambda_1 + \lambda_2)}\,t$
Since the underlined parameter is not $(\lambda_1 - \lambda_2)$, the process X(t) is not a Poisson process.
3. The Poisson process is not a stationary process
The mean of a Poisson process X(t) with parameter $\lambda$ is given by
$E[X(t)] = \lambda t$ (6.129)
Since the mean is a function of time, the Poisson process is not a stationary process.
4. The interval between two successive occurrences of a Poisson process with parameter $\lambda$ follows an exponential distribution with mean $1/\lambda$.
Let T be the time at which the first event occurs. Since T is a continuous random variable, the CDF of T is
$F_T(t) = P(T \le t) = 1 - P(T > t) = 1 - P[\text{no event occurs in } (0, t)] = 1 - P[X(t) = 0]$
$P[X(t) = 0] = p_0(t) = \frac{e^{-\lambda t}(\lambda t)^0}{0!} = e^{-\lambda t}$
$\Rightarrow F_T(t) = 1 - e^{-\lambda t}$
which is the CDF of an exponential distribution; hence T is exponentially distributed with mean $1/\lambda$.
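Property 4 can also be seen in simulation. The sketch below generates Poisson events by flagging slots of a fine time grid (a standard discretization; the rate, grid step and horizon are illustrative values of our own) and checks that the gaps between successive events have mean close to $1/\lambda$.

import numpy as np

rng = np.random.default_rng(2)
lam, dt, horizon = 4.0, 1e-3, 5000.0

n_slots = int(horizon / dt)
events = rng.random(n_slots) < lam * dt        # at most one event per slot, P = lam*dt
times = dt * np.flatnonzero(events)
gaps = np.diff(times)                          # interarrival times
print(gaps.mean(), 1 / lam)                    # sample mean vs theoretical mean 1/lam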
REVIEW QUESTIONS
29. If the process {X(t); t > 0} is a Poisson process with parameter l, obtain P[X(t) = n]. Is the process
first order stationary?
30. State the postulates of a Poisson process and derive the probability distribution. Also prove that the
sum of two independent Poisson processes is a Poisson process.
31. Derive the distribution of a Poisson process with rate λ and hence obtain its mean. Is the Poisson process stationary? Explain.
32. Derive expression for autocorrelation of a Poisson process.
33. Prove that the interval between two successive occurrences of a Poisson process with parameter λ follows an exponential distribution with mean $\frac{1}{\lambda}$.
Solved Problems
6.61 The customers arrive at a bank according to Poisson process with mean rate 5 per minute. Find the
probability that during a 1-minute interval, no customer arrives.
Solution Given $\lambda = 5$ per minute and $t = 1$ minute, so $\lambda t = 5$.
$p_k(t) = P[X(t) = k] = \frac{(\lambda t)^ke^{-\lambda t}}{k!}$
$P[X(1) = 0] = e^{-5} = 0.0067$
6.62
$P\left[X\left(\frac{1}{6}\right) = 10\right] = \frac{\left(\frac{25}{3}\right)^{10}e^{-25/3}}{10!} = 0.1064$
6.63 Telephone calls are initiated through an exchange at the average rate of 75 per minute and are
described by a Poisson process. Find the probability that more than 3 calls are initiated in any 5-second
period.
6.64 A machine goes out of order, whenever a component fails. The failure of this part follows a Poisson
process with a mean rate of 1 per week. Find the probability that 2 weeks have elapsed since the last failure.
If there are 5 spare parts of this component in an inventory and that the next supply is not due in 10 weeks,
find the probability that the machine will not be out of order in the next 10 weeks.
Solution Given: $\lambda = 1$ per week and $t = 2$ weeks $\Rightarrow \lambda t = 2$
P[2 weeks have elapsed since the last failure]
$= P[X(2) = 0] = \frac{e^{-2}(2)^0}{0!} = e^{-2} = 0.135$
Since there are only 5 spare parts, the machine stays in order as long as it fails at most 5 times in the 10 weeks. With $\lambda t = 10$, we can write
$P[X(10) \le 5] = \sum_{k=0}^{5}\frac{e^{-10}(10)^k}{k!}$
6.65 Aircraft arrive at an airport according to a Poisson process at a rate of 12 per hour. All aircraft are
handled by one air-traffic controller. If the controller takes a 2-minute break, what is the probability that he/
she will miss one or more arriving aircraft?
Solution The arrival rate is $\lambda = 12$ per hour $= \frac{1}{5}$ per minute, so for a 2-minute break
$\lambda t = \frac{2}{5} = 0.4$
The probability that the controller will miss one or more arriving aircraft is
$1 - P[\text{no aircraft is missed}] = 1 - \frac{(\lambda t)^0e^{-\lambda t}}{0!} = 1 - e^{-0.4} = 0.3297$
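The Poisson pmf of Eq. (6.110) makes these solved problems one-liners to verify numerically. A minimal sketch (the helper name poisson_pmf is ours):

from math import exp, factorial

def poisson_pmf(k: int, lam_t: float) -> float:
    """P[X(t) = k] = (lam*t)**k * exp(-lam*t) / k!  (Eq. 6.110)."""
    return (lam_t ** k) * exp(-lam_t) / factorial(k)

# Solved Problem 6.61: lam*t = 5, no arrival in one minute.
print(poisson_pmf(0, 5.0))                              # exp(-5) ~ 0.0067
# Solved Problem 6.64: P[X(10) <= 5] with lam*t = 10.
print(sum(poisson_pmf(k, 10.0) for k in range(6)))      # ~ 0.067
# Solved Problem 6.65: P[miss >= 1 aircraft] with lam*t = 0.4.
print(1 - poisson_pmf(0, 0.4))                          # ~ 0.3297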
Practice Problems
6.23 On a village road, buses cross a particular place at a Poisson rate of 4 per hour. If a boy starts counting at 9.00 a.m.,
(a) what is the probability that his count is 1 by 9.30 a.m.?
(b) what is the probability that his count is 3 by 11.00 a.m.? (Ans. (i) 0.3088, (ii) 0.522)
6.24 Let X(t) be a Poisson process with arrival rate λ. Find $E\{[X(t) - X(s)]^2\}$ for $t > s$. (Ans. $(t-s)^2\lambda^2 + \lambda(t-s)$)
6.25 Find the first-order characteristic function of a Poisson process. (Ans. $e^{-\lambda t(1 - e^{j\omega})}$)
6.26 A radioactive source emits particles at the rate of 10 per minute in accordance with a Poisson process. Each particle emitted has a probability of 1/3 of being recorded. Find the probability that at least 6 particles are recorded in a 6-minute period.
Solved Problems
6.66 Find the time average and time autocorrelation function of the random process $X(t) = A\cos(\omega_0 t + \theta)$, where A and $\omega_0$ are constants and $\theta \sim U(0, 2\pi)$.
Solution The time average of a sample function is
$\bar x = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}A\cos(\omega_0 t + \theta)\,dt = 0$
The time autocorrelation function is
$\mathcal{A}[x(t)x(t+\tau)] = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}A^2\cos(\omega_0 t + \theta)\cos(\omega_0 t + \omega_0\tau + \theta)\,dt$
$= \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\frac{A^2}{2}\left[\cos(\omega_0\tau) + \cos(2\omega_0 t + \omega_0\tau + 2\theta)\right]dt$
$= \frac{A^2}{2}\cos(\omega_0\tau)\left\{\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}dt\right\} + \underbrace{\frac{A^2}{2}\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\cos(2\omega_0 t + \omega_0\tau + 2\theta)\,dt}_{0}$
$= \frac{A^2}{2}\cos(\omega_0\tau)$
which equals the ensemble autocorrelation, $R_{XX}(\tau) = \frac{A^2}{2}\cos(\omega_0\tau)$.
6.67 Determine whether the random process X (t ) = A cos(w 0 t + q ) is wide-sense stationary or not,
where A, w0 are constants and q is a uniformly distributed random variable on the interval (0, 2p).
Solution For a uniformly distributed random variable on the interval (a, b), the mean is $\frac{b+a}{2}$ and the variance is $\frac{(b-a)^2}{12}$. Since $\theta$ is uniformly distributed on $(0, 2\pi)$,
$f(\theta) = \frac{1}{2\pi}$ for $0 \le \theta \le 2\pi$, and 0 otherwise
$E[\theta] = \pi$ and $\sigma_\theta^2 = \frac{(2\pi - 0)^2}{12} = \frac{\pi^2}{3}$
A process X(t) is said to be wide-sense stationary if $E[X(t)]$ is constant and $E[X(t)X(t+\tau)] = R_{XX}(\tau)$.
$E[X(t)] = E[A\cos(\omega_0 t + \theta)] = \int_{-\infty}^{\infty}A\cos(\omega_0 t + \theta)f(\theta)\,d\theta = \frac{1}{2\pi}\int_0^{2\pi}A\cos(\omega_0 t + \theta)\,d\theta$
$= \frac{A}{2\pi}\left[\sin(\omega_0 t + \theta)\right]_0^{2\pi} = \frac{A}{2\pi}\left[\sin(\omega_0 t + 2\pi) - \sin\omega_0 t\right] = 0$
The autocorrelation function of the random process is given by
$R_{XX}(t, t+\tau) = E[A\cos(\omega_0 t + \theta)\,A\cos(\omega_0 t + \omega_0\tau + \theta)]$
$= \frac{A^2}{2}\left\{E[\cos(2\omega_0 t + \omega_0\tau + 2\theta)] + E[\cos\omega_0\tau]\right\}$
$E[\cos(2\omega_0 t + \omega_0\tau + 2\theta)] = \frac{1}{2\pi}\int_0^{2\pi}\cos(2\omega_0 t + \omega_0\tau + 2\theta)\,d\theta = 0$
and $E[\cos\omega_0\tau] = \cos\omega_0\tau$, since it contains no random quantity. Hence
$R_{XX}(t, t+\tau) = \frac{A^2}{2}\cos(\omega_0\tau)$
Since the mean is constant and the autocorrelation function depends only on the shift $\tau$, the random process X(t) is wide-sense stationary.
6.68 A random process is defined by X(t) = A sin wt, t ≥ 0, where w is a constant . The amplitude A is
uniformly distributed between 0 and 1. Determine the following: (a) E[X(t)] (b) The autocorrelation of X(t)
(c) The autocovariance function of X(t).
Solution A is uniformly distributed between 0 and 1, so $E[A] = \frac{0+1}{2} = 0.5$ and $E[A^2] = \frac{1}{3}$.
(a) The mean of X(t) is given by
$E[X(t)] = E[A\sin\omega t] = E[A]\sin\omega t = 0.5\sin\omega t \quad [\because \omega \text{ is a constant}]$
(b) $R_{XX}(t, t+\tau) = E[A^2]\sin\omega t\sin\omega(t+\tau) = \frac{1}{3}\sin\omega t\sin\omega(t+\tau)$
(c) $C_{XX}(t, t+\tau) = R_{XX}(t, t+\tau) - E[X(t)]E[X(t+\tau)] = \left(\frac{1}{3} - \frac{1}{4}\right)\sin\omega t\sin\omega(t+\tau) = \frac{1}{12}\sin\omega t\sin\omega(t+\tau)$
6.69 Find the autocorrelation function of the random process $X(t) = A\cos(\omega t + \theta)$, where $\omega$ is a constant and $A \sim U(0, 1)$ and $\theta \sim U(0, 2\pi)$ are independent random variables.
Solution
$R_{XX}(t, t+\tau) = E[A^2]\,E[\cos(\omega t + \theta)\cos(\omega t + \omega\tau + \theta)] = \frac{E[A^2]}{2}\left\{\cos\omega\tau + E[\cos(2\omega t + \omega\tau + 2\theta)]\right\}$
$E[\cos(2\omega t + \omega\tau + 2\theta)] = \int_{-\infty}^{\infty}\cos(2\omega t + \omega\tau + 2\theta)f(\theta)\,d\theta = \frac{1}{2\pi}\int_0^{2\pi}\cos(2\omega t + \omega\tau + 2\theta)\,d\theta = 0$
With $E[A^2] = \frac{1}{3}$,
$R_{XX}(t, t+\tau) = \frac{1}{6}\cos\omega\tau$
6.70 Find the autocorrelation function of the random process $X(t) = A\cos(\omega_0 t + \theta)$, where $\omega_0$, A and $\theta$ are mutually independent random variables uniformly distributed over (0, 1), (0, 1) and (0, π) respectively.
Solution
$R_{XX}(t, t+\tau) = E[A^2]\,E[\cos(\omega_0 t + \theta)\cos(\omega_0 t + \omega_0\tau + \theta)] = \frac{E[A^2]}{2}E\left[\cos\omega_0\tau + \cos(2\omega_0 t + \omega_0\tau + 2\theta)\right]$
With $\theta \sim U(0, \pi)$,
$E[\cos 2\theta] = \frac{1}{\pi}\int_0^{\pi}\cos 2\theta\,d\theta = \frac{1}{\pi}\left[\frac{\sin 2\theta}{2}\right]_0^{\pi} = 0$
$E[\sin 2\theta] = \frac{1}{\pi}\int_0^{\pi}\sin 2\theta\,d\theta = \frac{-1}{\pi}\left[\frac{\cos 2\theta}{2}\right]_0^{\pi} = 0$
so the t-dependent term $E[\cos(2\omega_0 t + \omega_0\tau + 2\theta)]$ vanishes. Also,
$E[\cos\omega_0\tau] = \int_{-\infty}^{\infty}\cos(\omega_0\tau)f(\omega_0)\,d\omega_0 = \int_0^1\cos(\omega_0\tau)\,d\omega_0 = \left[\frac{\sin\omega_0\tau}{\tau}\right]_0^1 = \frac{\sin\tau}{\tau}$
$E[A^2] = \sigma_A^2 + \{E[A]\}^2 = \frac{1}{12} + \left(\frac{1}{2}\right)^2 = \frac{4}{12} = \frac{1}{3}$
$\therefore R_{XX}(t, t+\tau) = \frac{1}{2}\cdot\frac{1}{3}\cdot\frac{\sin\tau}{\tau} = \frac{\sin\tau}{6\tau}$
Practice Problems
6.27 Repeat the above problem with $\omega \sim U(1, 2)$, $A \sim U(0, 1)$ and $\theta \sim U\!\left(0, \frac{\pi}{2}\right)$.
6.28 Verify whether the sine-wave process $X(t) = A\sin(\omega t + \theta)$, where A is uniformly distributed in the interval $-1$ to $1$, is WSS or not. (Ans. not WSS)
$p_X(X_n) = \begin{cases} p & x = 1 \\ 1 - p & x = 0 \end{cases}$ (6.131)
The mean of the Bernoulli process $X_n$ is
$E[X_n] = m_{X_n} = 1\cdot P(X_n = 1) + 0\cdot P(X_n = 0) = p$ (6.132)
and the variance is
$\mathrm{Var}(X_n) = E[X_n^2] - m_{X_n}^2 = (1)^2p + (0)^2(1-p) - p^2 = p(1-p)$ (6.133)
The autocorrelation is given by
$E[X_{n_1}X_{n_2}] = (1)^2p + (0)^2(1-p) = p$ for $n_1 = n_2$ (6.134)
$E[X_{n_1}X_{n_2}] = E[X_{n_1}]E[X_{n_2}] = p^2$ for $n_1 \ne n_2$
A sample function of the Bernoulli process is shown in Fig. 6.21.
The autocovariance is
$C_X(n_1, n_2) = \begin{cases} p - p^2 = p(1-p) & \text{if } n_1 = n_2 \\ p^2 - p^2 = 0 & \text{if } n_1 \ne n_2 \end{cases}$ (6.135)
The random variable $Y_n$ denotes the number of successes in n Bernoulli trials. The pmf of $Y_n$ is given by
$p_{Y_n}(k) = \binom{n}{k}p^k(1-p)^{n-k}, \quad k = 0, 1, \ldots, n$
$E[Y_n] = E\left[\sum_{i=1}^n X_i\right] = \sum_{i=1}^n E[X_i] = \sum_{i=1}^n p = np$ (6.137)
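Equations (6.132), (6.133) and (6.137) can be checked with a few lines of simulation. A minimal sketch, assuming an illustrative success probability p and trial counts of our own choosing:

import numpy as np

rng = np.random.default_rng(3)
p, n, n_trials = 0.3, 50, 100000

X = (rng.random((n_trials, n)) < p).astype(float)   # Bernoulli process samples, X_i in {0, 1}
print(X.mean(), p)                                  # E[X_n] = p          (Eq. 6.132)
print(X.var(), p * (1 - p))                         # Var(X_n) = p(1-p)   (Eq. 6.133)
Y = X.sum(axis=1)                                   # Y_n counts successes in n trials
print(Y.mean(), n * p)                              # E[Y_n] = np         (Eq. 6.137)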
Solved Problems
6.71 A random process is defined by $S_n = 2X_n + 1$, where $X_n$ is a Bernoulli process. Find the mean and variance of $S_n$.
Solution Since $X_n$ is a Bernoulli process, it takes the value 1 with probability p and 0 with probability $q = 1 - p$. The mean and variance of the Bernoulli process are
$E[X_n] = p; \quad \mathrm{Var}(X_n) = pq$
$E[S_n] = 2E[X_n] + 1 = 2p + 1$
$\mathrm{Var}[S_n] = \mathrm{Var}[2X_n + 1] = 4\,\mathrm{Var}[X_n] = 4pq$
6.72 The probability that a student passes an examination is 0.4. If 10 students have taken the examination, find the following probabilities:
(a) at least 5 pass, (b) exactly 6 students pass.
Solution Let X denote the number of students who pass the examination. Then X has the binomial distribution with $n = 10$, $p = 0.4$ and pmf
$p_X(n) = \binom{10}{n}(0.4)^n(0.6)^{10-n}, \quad n = 0, 1, \ldots, 10$
(a) The probability that at least 5 students pass is given by
$P(X \ge 5) = \sum_{n=5}^{10}p_X(n) = 0.3669$
(b) The probability that exactly 6 students pass is
$P(X = 6) = \binom{10}{6}(0.4)^6(0.6)^4 = 0.1115$
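The binomial sums above are quick to evaluate directly. A minimal sketch (the helper name binom_pmf is ours):

from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    # P(X = k) for a binomial(n, p) random variable
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Solved Problem 6.72 with n = 10, p = 0.4:
print(sum(binom_pmf(k, 10, 0.4) for k in range(5, 11)))  # P(X >= 5) ~ 0.3669
print(binom_pmf(6, 10, 0.4))                             # P(X = 6)  ~ 0.1115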
Practice Problem
6.29 The diodes produced by a certain company are defective with probability 0.02. The company sells the diodes in packages of 20 and offers a money-back guarantee that at most 2 of the 20 diodes are defective. What proportion of packages sold must the company replace?
$f_{X(t_1),\ldots,X(t_n)}(x_1, \ldots, x_n) = \frac{1}{(2\pi)^{n/2}|C_{XX}|^{1/2}}\exp\left\{-\frac{1}{2}(X - \bar X)^TC_{XX}^{-1}(X - \bar X)\right\}$ (6.139)
where $C_{XX}$ is known as the covariance matrix, X is the vector of the random variables $X(t_k)$ and $\bar X$ is the vector of their mean functions:
$X = \begin{bmatrix} X(t_1) \\ X(t_2) \\ \vdots \\ X(t_n) \end{bmatrix}, \qquad \bar X = \begin{bmatrix} \bar X(t_1) \\ \bar X(t_2) \\ \vdots \\ \bar X(t_n) \end{bmatrix}$ (6.140)
$C_{XX} = \begin{bmatrix} C_{XX}(t_1, t_1) & C_{XX}(t_1, t_2) & \cdots & C_{XX}(t_1, t_n) \\ C_{XX}(t_2, t_1) & C_{XX}(t_2, t_2) & \cdots & C_{XX}(t_2, t_n) \\ \vdots & & & \vdots \\ C_{XX}(t_n, t_1) & C_{XX}(t_n, t_2) & \cdots & C_{XX}(t_n, t_n) \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & & & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{bmatrix}$ (6.141)
If the random variables $X(t_1), X(t_2), \ldots, X(t_n)$ are uncorrelated, then
$C_{XX}(t_i, t_j) = \begin{cases} \sigma_i^2 & \text{for } i = j \\ 0 & \text{for } i \ne j \end{cases}$ (6.142)
That is, $C_{XX}$ is a diagonal matrix with the elements on the principal diagonal equal to $\sigma_i^2$:
$C_{XX} = \begin{bmatrix} \sigma_1^2 & 0 & \cdots & 0 \\ 0 & \sigma_2^2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n^2 \end{bmatrix}$ (6.143)
$C_{XX}^{-1} = \begin{bmatrix} 1/\sigma_1^2 & 0 & \cdots & 0 \\ 0 & 1/\sigma_2^2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1/\sigma_n^2 \end{bmatrix}$ (6.144)
$C_{XX}^{1/2} = \begin{bmatrix} \sigma_1 & 0 & \cdots & 0 \\ 0 & \sigma_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \sigma_n \end{bmatrix}$ (6.145)
$|C_{XX}|^{1/2} = \sigma_1\sigma_2\cdots\sigma_n = \prod_{i=1}^n\sigma_i$ (6.146)
$(X - \bar X)^TC_{XX}^{-1}(X - \bar X) = \sum_{k=1}^n\frac{\left[x_k - \bar X(t_k)\right]^2}{\sigma_k^2}$ (6.147)
$f_{X(t_1),\ldots,X(t_n)}(x_1, x_2, \ldots, x_n) = \frac{1}{(2\pi)^{n/2}\prod_{i=1}^n\sigma_i}\exp\left[-\sum_{k=1}^n\frac{\left[x_k - \bar X(t_k)\right]^2}{2\sigma_k^2}\right]$ (6.148)
Solved Problems
Solved Problems
Solution
RXX(t) = 9 + e–2|t|
6.74 X(t) is a Gaussian process with mean $E[X(t)] = 0$ and ACF $R_{XX}(\tau) = e^{-4|\tau|}$. Let us define a random variable
$Y = \int_0^A X(t)\,dt$
where A is a uniformly distributed random variable over 1 and 4 and is independent of the random process X(t). Determine $E[Y]$ and $\sigma_Y^2$.
Solution Since $E[X(t)] = 0$, $E[Y] = \int E[X(t)]\,dt = 0$. For the interval (0, 1),
$\sigma_Y^2 = \int_0^1\int_0^1 R_{XX}(t - s)\,dt\,ds = \int_0^1\int_0^1 e^{-4|s-t|}\,dt\,ds$
We have
$e^{-4|s-t|} = \begin{cases} e^{-4(s-t)} & \text{for } s \ge t \\ e^{-4(t-s)} & \text{for } t > s \end{cases}$
(The regions $t > s$ and $s > t$ of the unit square are shown in Fig. 6.22.)
By symmetry of the integrand about the line $t = s$,
$\sigma_Y^2 = 2\int_{s=0}^1\int_{t=0}^s e^{-4(s-t)}\,dt\,ds = 2\int_{s=0}^1 e^{-4s}\left[\frac{e^{4t}}{4}\right]_0^s ds = 2\int_{s=0}^1 e^{-4s}\left(\frac{e^{4s} - 1}{4}\right)ds$
$= \frac{1}{2}\int_0^1\left(1 - e^{-4s}\right)ds = \frac{1}{2}\left[s + \frac{e^{-4s}}{4}\right]_0^1 = \frac{1}{2}\left[1 + \frac{1}{4}\left(e^{-4} - 1\right)\right] = 0.377$
6.75 Let X(t) be a zero-mean Gaussian random process with autocovariance function given by
$C_{XX}(t_1, t_2) = 4e^{-|t_1 - t_2|}$
Find the joint pdf of X(t) and X(t + λ).
Solution For the two samples X(t) and X(t + λ),
$f_{X(t),X(t+\lambda)}(x_1, x_2) = \frac{1}{2\pi|C_{XX}|^{1/2}}\exp\left\{-\frac{1}{2}x^TC_{XX}^{-1}x\right\}$
with
$C_{XX} = \begin{bmatrix} 4 & 4e^{-|\lambda|} \\ 4e^{-|\lambda|} & 4 \end{bmatrix}, \qquad |C_{XX}| = 16\left(1 - e^{-2|\lambda|}\right)$
$C_{XX}^{-1} = \frac{1}{16\left(1 - e^{-2|\lambda|}\right)}\begin{bmatrix} 4 & -4e^{-|\lambda|} \\ -4e^{-|\lambda|} & 4 \end{bmatrix}$
$x^TC_{XX}^{-1}x = \frac{4x_1^2 + 4x_2^2 - 8e^{-|\lambda|}x_1x_2}{16\left(1 - e^{-2|\lambda|}\right)} = \frac{x_1^2 + x_2^2 - 2e^{-|\lambda|}x_1x_2}{4\left(1 - e^{-2|\lambda|}\right)}$
Including the factor $\frac{1}{2}$ in the exponent,
$f_{X(t),X(t+\lambda)}(x_1, x_2) = \frac{1}{8\pi\sqrt{1 - e^{-2|\lambda|}}}\exp\left\{-\frac{x_1^2 + x_2^2 - 2e^{-|\lambda|}x_1x_2}{8\left(1 - e^{-2|\lambda|}\right)}\right\}$
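The closed form above can be cross-checked against the generic n = 2 Gaussian density of Eq. (6.139) evaluated numerically. A minimal sketch (the lag and test point are illustrative values of our own):

import numpy as np

def joint_pdf(x1: float, x2: float, lam: float) -> float:
    # Closed form from Solved Problem 6.75; r = e^{-|lambda|} is the correlation coefficient.
    r = np.exp(-abs(lam))
    q = (x1**2 + x2**2 - 2*r*x1*x2) / (8*(1 - r**2))
    return np.exp(-q) / (8*np.pi*np.sqrt(1 - r**2))

def gaussian_pdf2(x: np.ndarray, C: np.ndarray) -> float:
    # Generic zero-mean bivariate Gaussian density, Eq. (6.139) with n = 2.
    q = x @ np.linalg.inv(C) @ x
    return np.exp(-0.5*q) / (2*np.pi*np.sqrt(np.linalg.det(C)))

lam = 0.7
C = np.array([[4.0, 4*np.exp(-lam)], [4*np.exp(-lam), 4.0]])
x = np.array([1.0, -0.5])
print(joint_pdf(x[0], x[1], lam), gaussian_pdf2(x, C))   # the two values should match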
Practice Problems
6.30 A Gaussian random process has the autocorrelation function
$R_{XX}(\tau) = \frac{9\sin\pi\tau}{\pi\tau}$
Determine the covariance matrix for the random variables X(t), X(t+2), X(t+4) and X(t+6).
$\left(\text{Ans. } \begin{bmatrix} 9 & 0 & 0 & 0 \\ 0 & 9 & 0 & 0 \\ 0 & 0 & 9 & 0 \\ 0 & 0 & 0 & 9 \end{bmatrix}\right)$
6.31 If X(t) is a Gaussian random process, prove that Y(t) = X(t + l) – X(t) is a Gaussian random process.
Let N(t) be the number of polarity transitions in (0, t), a Poisson process of rate λ:
$P[N(t) = k] = \frac{(\lambda t)^ke^{-\lambda t}}{k!}$ for $k = 0, 1, \ldots$
$P[N(t) = \text{even}] = \sum_{j=0}^{\infty}P[N(t) = 2j] = \sum_{j=0}^{\infty}\frac{(\lambda t)^{2j}}{(2j)!}e^{-\lambda t}$ (6.151)
$= e^{-\lambda t}\left[1 + \frac{(\lambda t)^2}{2!} + \frac{(\lambda t)^4}{4!} + \cdots\right] = e^{-\lambda t}\left[\frac{e^{\lambda t} + e^{-\lambda t}}{2}\right] = \frac{1}{2}\left[1 + e^{-2\lambda t}\right]$ (6.152)
Hence $P[X(t) = 1 \mid X(0) = 1] = P[N(t) = \text{even}] = \frac{1}{2}\left(1 + e^{-2\lambda t}\right)$. Similarly,
$P[X(t) = -1 \mid X(0) = -1] = P[N(t) = \text{even}] = \frac{1}{2}\left(1 + e^{-2\lambda t}\right)$ (6.153)
$\Rightarrow P[X(t) = \pm 1 \mid X(0) = \pm 1] = \frac{1}{2}\left(1 + e^{-2\lambda t}\right)$ (6.154)
$P[X(t) = 1 \mid X(0) = -1] = P[N(t) = \text{odd}] = \sum_{j=0}^{\infty}\frac{(\lambda t)^{2j+1}}{(2j+1)!}e^{-\lambda t}$
$= e^{-\lambda t}\left[\frac{\lambda t}{1!} + \frac{(\lambda t)^3}{3!} + \cdots\right] = e^{-\lambda t}\left[\frac{e^{\lambda t} - e^{-\lambda t}}{2}\right] = \frac{1}{2}\left(1 - e^{-2\lambda t}\right)$ (6.155)
Similarly,
$P[X(t) = -1 \mid X(0) = 1] = P[N(t) = \text{odd}] = \frac{1}{2}\left(1 - e^{-2\lambda t}\right)$ (6.156)
$\Rightarrow P[X(t) = \pm 1 \mid X(0) = \mp 1] = \frac{1}{2}\left(1 - e^{-2\lambda t}\right)$ (6.157)
$P[X(t) = 1] = P[X(t) = 1 \mid X(0) = 1]\,P[X(0) = 1] + P[X(t) = 1 \mid X(0) = -1]\,P[X(0) = -1]$
$= \frac{1}{2}\left[1 + e^{-2\lambda t}\right]\cdot\frac{1}{2} + \frac{1}{2}\left[1 - e^{-2\lambda t}\right]\cdot\frac{1}{2} = \frac{1}{2}$ (6.158)
$P[X(t) = -1] = 1 - P[X(t) = 1] = \frac{1}{2}$ (6.159)
The mean of X(t) is given by
$m_X(t) = 1\cdot P[X(t) = 1] + (-1)\cdot P[X(t) = -1] = \frac{1}{2} - \frac{1}{2} = 0$ (6.160)
$\mathrm{Var}[X(t)] = E[X^2(t)] = (1)^2P[X(t) = 1] + (-1)^2P[X(t) = -1] = \frac{1}{2} + \frac{1}{2} = 1$ (6.161)
The autocovariance is given by
$C_{XX}(t_1, t_2) = E[X(t_1)X(t_2)] - E[X(t_1)]E[X(t_2)] = E[X(t_1)X(t_2)]$ (6.162)
$E[X(t_1)X(t_2)] = \sum_{x_1 = \pm 1}\sum_{x_2 = \pm 1}x_1x_2\,P[X(t_1) = x_1,\ X(t_2) = x_2]$ (6.163)
For $t_2 > t_1$, the two same-sign terms together have weight $+1$ and total probability $\frac{1}{2}\left(1 + e^{-2\lambda(t_2 - t_1)}\right)$, while the two opposite-sign terms have weight $-1$ and total probability $\frac{1}{2}\left(1 - e^{-2\lambda(t_2 - t_1)}\right)$. Hence
$C_{XX}(t_1, t_2) = \frac{1}{2}\left(1 + e^{-2\lambda(t_2 - t_1)}\right) - \frac{1}{2}\left(1 - e^{-2\lambda(t_2 - t_1)}\right) = e^{-2\lambda|t_2 - t_1|}$
Solved Problems
6.77 A random process X(t) takes the values 0 and 1 (a sample function is shown in Fig. 6.25) and switches from one value to the other with each occurrence of an event in a Poisson process of rate λ. Find the mean, variance, autocorrelation and autocovariance of X(t).
Fig. 6.25
Solution Assuming $P[X(t) = 0] = P[X(t) = 1] = \frac{1}{2}$, the mean is $E[X(t)] = 0\cdot\frac{1}{2} + 1\cdot\frac{1}{2} = \frac{1}{2}$ and the variance is $E[X^2(t)] - \{E[X(t)]\}^2 = \frac{1}{2} - \frac{1}{4} = \frac{1}{4}$.
For $t_2 > t_1$, $X(t_2) = X(t_1)$ only if an even number of events occurs in $(t_1, t_2)$:
$P[X(t_2) = 1 \mid X(t_1) = 1] = e^{-\lambda(t_2 - t_1)}\left[\sum_{k=0}^{\infty}\frac{\lambda^{2k}(t_2 - t_1)^{2k}}{(2k)!}\right] = e^{-\lambda(t_2 - t_1)}\left[1 + \frac{\lambda^2(t_2 - t_1)^2}{2!} + \cdots\right]$
$= e^{-\lambda(t_2 - t_1)}\left[\frac{e^{\lambda(t_2 - t_1)} + e^{-\lambda(t_2 - t_1)}}{2}\right] = \frac{1}{2}\left[1 + e^{-2\lambda(t_2 - t_1)}\right]$
$R_{XX}(t_1, t_2) = P[X(t_2) = 1 \mid X(t_1) = 1]\,P[X(t_1) = 1] = \frac{1}{2}\left[1 + e^{-2\lambda(t_2 - t_1)}\right]\cdot\frac{1}{2} = \frac{1}{4}\left[1 + e^{-2\lambda(t_2 - t_1)}\right]$ if $t_2 > t_1$
If $t_2 < t_1$,
$R_{XX}(t_1, t_2) = \frac{1}{4}\left[1 + e^{-2\lambda(t_1 - t_2)}\right]$
Therefore,
$R_{XX}(t_1, t_2) = \frac{1}{4}\left[1 + e^{-2\lambda|t_1 - t_2|}\right] \;\Rightarrow\; R_{XX}(t, t+\tau) = \frac{1}{4}\left[1 + e^{-2\lambda|\tau|}\right]$
Autocovariance:
$C_{XX}(t, t+\tau) = R_{XX}(t, t+\tau) - E[X(t)]E[X(t+\tau)] = \frac{1}{4}\left[1 + e^{-2\lambda|\tau|}\right] - \left(\frac{1}{2}\right)^2 = \frac{1}{4}e^{-2\lambda|\tau|}$
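The 0/1 switching process is easy to simulate on a fine time grid, which gives a quick check of $R_{XX}(\tau) = \frac{1}{4}\left[1 + e^{-2\lambda|\tau|}\right]$. A minimal Monte Carlo sketch, with rate, grid step and trial counts chosen for illustration:

import numpy as np

rng = np.random.default_rng(4)
lam, dt, n_steps, n_trials = 1.5, 0.01, 120, 20000

flips = rng.random((n_trials, n_steps)) < lam * dt   # Poisson events approximated on a fine grid
X0 = rng.integers(0, 2, size=(n_trials, 1))          # equally likely initial state, 0 or 1
X = (X0 + np.cumsum(flips, axis=1)) % 2              # toggle the state at every event

k = 100                                              # lag tau = k*dt = 1.0
tau = k * dt
print(np.mean(X[:, 0] * X[:, k]), 0.25 * (1 + np.exp(-2 * lam * tau)))   # estimate vs theory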
6.78 Two stationary zero-mean random processes X(t) and Y(t) have a variance of 25 each and a cross-correlation function $R_{XY}(\tau) = 25e^{-(\tau - 4)^2}$. A new random process $Z(t) = X(t)Y(t + t_x)$ is formed. Find the value of $t_x$ for which Z(t) has the largest variance.
Solution The variance of Z(t) grows with the cross-correlation term, so we set its derivative with respect to $t_x$ to zero:
$\frac{d\sigma_Z^2(t)}{dt_x} = 50\left\{-2(t_x - 4)e^{-(t_x - 4)^2}\right\} = 0 \;\Rightarrow\; t_x = 4$
REVIEW QUESTIONS
34. Suppose X and Y are two random variables, when do you say that X and Y are (a) orthogonal? (b)
uncorrelated?
35. What is the difference between a random variable and random process?
36. Explain with suitable examples: continuous, discrete and mixed-type random process.
37. Define wide-sense stationary process.
38. Find the mean of the stationary process X(t) whose ACF is given by $R_{XX}(\tau) = 16 + \dfrac{9}{1 + 16\tau^2}$.
39. State any two properties of cross-correlation function.
40. Find the variance of the stationary process X(t) whose autocorrelation function is given by $R_{XX}(\tau) = 2 + 4e^{-2|\tau|}$.
41. State the postulates of a Poisson process.
42. Consider the random process $X(t) = \cos(t + \phi)$, where $\phi$ is a random variable with pdf
$f(\phi) = \frac{1}{\pi}$, $-\frac{\pi}{2} < \phi < \frac{\pi}{2}$
Check whether or not the process is wide-sense stationary.
43. Prove that the sum of two independent Poisson processes is a Poisson process.
44. Prove that for a WSS process X(t), $R_{XX}(t, t+\tau)$ is an even function of $\tau$.
45. When is a random process said to be mean ergodic?
46. Define cross-correlation function of two random processes X(t) and Y(t). State the properties of
cross-correlation functions.
47. List the properties of auto-correlation functions.
48. Discuss in detail about
(a) First-order stationary random process
(b) Second-order and wide-sense stationary random process
49. Define Poisson process stating the assumptions involved and obtain the probability distribution for
that. Find the autocorrelation function of the process.
50. Show that the semi-telegraph process is not wide-sense stationary. Construct a random telegraph
process which is wide-sense stationary.
51. Find the variance of the stationary process X(t) whose auto-correlation function is given by
RXX(t) = 2 + 4 e–2|t|
EXERCISES
Problems
1. If a random process X(t) is given by $X(t) = 10\cos(100t + \theta)$, where $\theta$ is uniformly distributed over $(-\pi, \pi)$, prove that X(t) is correlation ergodic.
2. Consider a random process X(t) that assumes the values ±1. Suppose that X(0) = ±1 with probability 1/2, and suppose that X(t) then changes polarity with each occurrence of an event in a Poisson process of rate a. Find the mean, variance and autocovariance of X(t).
3. Examine whether the random process X(t) = A cos (wt + q) is a wide-sense stationary if A and w are
constants and q is a uniformly distributed random variable in (0, 2p).
4. Assume that the number of messages input to a communication channel in an interval of duration 5 seconds is a Poisson process with mean λ = 0.3. Compute:
(a) the probability that exactly 3 messages will arrive during a 10-second interval;
(b) the probability that the number of message arrivals in an interval of duration 5 seconds is between 3 and 7.
5. If the process X(t) is a Poisson process with parameter l, obtain P[X(t) = n]. Is the process first-order
stationary?
6. Prove that a random telegraph signal process $Y(t) = aX(t)$ is a wide-sense stationary process when a is a random variable which is independent of X(t), assumes values $-1$ and $+1$ with equal probability, and $R_{XX}(t_1, t_2) = e^{-2\lambda|t_1 - t_2|}$.
7. If X(t) and Y(t) are two random processes with autocorrelation functions $R_{XX}(\tau)$ and $R_{YY}(\tau)$ respectively, then prove that $|R_{XY}(\tau)| \le \sqrt{R_{XX}(0)R_{YY}(0)}$.
8. The process X(t) has a probability distribution, under certain conditions, given by
$P\{X(t) = n\} = \begin{cases} \dfrac{(at)^{n-1}}{(1 + at)^{n+1}}, & n = 1, 2, \ldots \\[2mm] \dfrac{at}{1 + at}, & n = 0 \end{cases}$
Find the mean and variance of the process. Is the process first-order stationary?
9. Show that the random process X(t) = A cos (wt + q) is wide-sense stationary, if A and w are constants
and q is a uniformly distributed in (0, 2p).
10. A stationary random process has an autocorrelation function of
$R_{XX}(\tau) = \begin{cases} 10\left(1 - \dfrac{|\tau|}{0.05}\right), & |\tau| < 0.05 \\ 0, & \text{elsewhere} \end{cases}$
Determine the mean and variance of the process X(t).
11. Consider random variables Y1 and Y2 related to arbitrary random variables X and Y by the coordinate
rotation.
Y1 = X cos q + Y sin q; Y2 = –X sin q + Y cos q
(a) Find the covariance of Y1 and Y2
(b) For what value of q is the random variables Y1 and Y2 uncorrelated
12. Let two random processes X(t) and Y(t) be denoted by
X(t) = A cos ω0t + B sin ω0t
Y(t) = B cos w0t – A sin w0t
where A and B are random variables and w0 is a constant. Assume A and B are uncorrelated, zero
mean random variables with same variance. Find the cross-correlation function RXY(t, t + t) and
show that X(t) and Y(t) are jointly wide-sense stationary.
13. Let X(t) be a stationary continuous random process that is differentiable. Denote its time derivative
Ẋ(t).
(a) Show that E[Ẋ(t)] = 0
(b) Find RX˙X˙(t) in terms of RXX(t)
14. Statistically independent zero-mean random processes X(t) and Y(t) have autocorrelation functions $R_{XX}(\tau) = e^{-|\tau|}$ and $R_{YY}(\tau) = \cos(2\pi\tau)$ respectively.
(a) Find the autocorrelation function of the sum
W1(t) = X(t) + Y(t)
(b) Find the autocorrelation function of difference
W2(t) = X(t) – Y(t)
(c) Find the cross-correlation function of W1(t) and W2(t)
15. For two random variables X and Y,
$f_{X,Y}(x, y) = 0.05\,\delta(x+1)\delta(y) + 0.1\,\delta(x)\delta(y) + 0.1\,\delta(x)\delta(y-2) + 0.4\,\delta(x-1)\delta(y+2) + 0.2\,\delta(x-1) + 0.15\,\delta(x-1)\delta(y-3)$
Find (a) the correlation,
(b) the covariance, and
(c) the correlation coefficient of X and Y.
(d) Are X and Y either uncorrelated or orthogonal?
16. If customers arrive at a counter in accordance with a Poisson process with a mean rate of 2 per minute, find the probability that the interval between 2 consecutive arrivals is
(a) more than 1.5 minutes
(b) between 2 and 3 minutes
(c) 4 minutes or less
17. The particles are emitted from a radioactive source at the rate of 10 per hour. Find the probability
that exactly 4 particles are emitted during a 25 minute period.
18. If the customers arrive at a bank according to a Poisson process with mean rate of 5 per minute, find
the probability that during a 1 minute interval no customer arrives.
19. The random binary transmission process X(t) is a wide-sense stationary process with zero mean and autocorrelation function $R_{XX}(\tau) = 1 - \dfrac{|\tau|}{T}$, where T is a constant. Find the mean and variance of the time average of X(t) over (0, T). Is X(t) mean ergodic?
20. Consider the random process X(t) = 5 sin (wt + f) where w is a constant and f is a random variable
uniformly distributed in (0, 2p). Show that X(t) is autocorrelation ergodic. Verify any two properties
of the auto-correlation function of X(t).
21. Show that the random process X(t) = A cos wt + B sin wt where w is a constant, A and B are
uncorrelated random variables with zero-mean and common variance is stationary in the wide-
sense.
22. Establish the property RXX (t ) £ RXX (0) RYY (0) for stationary processes X(t) and Y(t). Verify
this property for the processes X(t) = A cos (wt + f) and Y(t) = B sin (wt + f) where A, B and w are
constants while f is a random variable uniformly distributed in (0, 2p).
23. The autocorrelation function of a random process X(t) is given by
$R_{XX}(\tau) = \begin{cases} 0, & |\tau| > 1 \\ 5(1 - |\tau|), & |\tau| \le 1 \end{cases}$
Is the process mean ergodic?
24. Calculate the autocorrelation function of the rectangular pulse shown in Fig. 6.26.
Fig. 6.26
Fig. 6.27
30. A wide-sense stationary random process X(t) has a mean-square value $E[X^2(t)] = 9$. Give reasons why the functions given below can or cannot be its autocorrelation function.
(a) $R_{XX}(\tau) = \dfrac{9\cos 2\tau}{1 + \tau^2}$ (b) $R_{XX}(\tau) = \dfrac{9\tau}{1 + 2\tau^2 + 4\tau^4}$
(c) $R_{XX}(\tau) = \dfrac{\tau^2 + 36}{\tau^2 + 4}$ (d) $R_{XX}(\tau) = \dfrac{9\cos(\tau)}{1 + 2\tau^2 + 4\tau^4}$
31. A stationary process has an autocorrelation function
$R_{XX}(\tau) = \begin{cases} 20(1 - |\tau|), & |\tau| \le k \\ 0, & |\tau| > k \end{cases}$
Find the largest value of k for which $R_{XX}(\tau)$ could be a valid autocorrelation function. (Ans. k = 1)
Multiple-choice Questions
1. Which of the following statements is/are correct?
(a) Cross-correlation function is an even function.
(b) Autocorrelation function is an even function.
(c) Cross-correlation has maximum value at the origin.
(d) Autocorrelation function is an odd function
2. The autocovariance function of X(t) is given by
(a) E[X(t1) X(t2)] (b) E[X(t1) X(t2)] – E[X(t1)] E[X(t2)]
(c) E[X(t1)] E[X(t2)] (d) E[X(t1) + X(t2)]
3. For a random process $X(t) = \cos(\lambda t + \theta)$, where $\theta$ is uniformly distributed in $(-\pi, \pi)$, the ACF is given by
(a) $\cos\dfrac{\lambda\tau}{2}$ (b) $\dfrac{1}{2}\cos\lambda\tau$ (c) $\dfrac{1 - \cos\lambda\tau}{2}$ (d) $\dfrac{1 - \cos^2\lambda\tau}{2}$
4. If X(t) is a Poisson process, then the correlation coefficient between X(t) and X(t + τ) is
(a) $\sqrt{\dfrac{t+\tau}{t}}$ (b) $\sqrt{\dfrac{t}{t+\tau}}$ (c) $\dfrac{t}{t+\tau}$ (d) $\dfrac{t+\tau}{t}$
5. RXX(t) =
(a) RXX(–t) (b) –RXX(t) (c) |RXX(t)| (d) None of these
6. The variance of the stationary process whose ACF is $\dfrac{25\tau^2 + 36}{6.25\tau^2 + 4}$ is
(a) 6 (b) 5 (c) 10 (d) 9
7. A random process is defined as
Ï A for 0 £ t £ 1
X(t) = Ì
Ó 0 otherwise
7.1 INTRODUCTION
In the previous chapter we studied the characterization of a random process in time domain by its mean,
autocorrelation function and covariance functions. In this chapter, we will study the spectral description of
the random process using the Power Spectral Density (PSD).
For a deterministic signal, it is well known that spectral information can be obtained using the Fourier transform. However, the Fourier transform cannot be obtained for random signals because, for such signals, the Fourier transform does not exist. Therefore, frequency-domain analysis is carried out on a random signal by transforming the autocorrelation function into the frequency domain. That is, the Fourier transform of the autocorrelation function, known as the power density spectrum, can be used to obtain the frequency-domain representation of a random process. Since this requires knowledge of the Fourier transform, a brief introduction to the Fourier transform is given in the next section.
Fourier Transform
The Fourier transform of a continuous-time deterministic signal x(t) is given by
$X(j\omega) = \int_{-\infty}^{\infty}x(t)e^{-j\omega t}\,dt$ (7.1)
The function $X(j\omega)$ is called the spectrum of x(t). Since $X(j\omega)$ is a complex quantity, it has magnitude $|X(j\omega)|$ and phase $\angle X(j\omega)$. The plot of $|X(j\omega)|$ versus $\omega$ is known as the magnitude spectrum and the plot of $\angle X(j\omega)$ versus $\omega$ is known as the phase spectrum. The Fourier transform of a signal exists only if it is absolutely integrable, that is,
$\int_{-\infty}^{\infty}|x(t)|\,dt < \infty$ (7.2)
For the truncated signal $x_T(t)$,
$\int_{-T}^{T}|x_T(t)|\,dt < \infty$
and the average power is
$P_{XX} = \frac{1}{2\pi}\int_{-\infty}^{\infty}\lim_{T\to\infty}\frac{E\left[|X_T(j\omega)|^2\right]}{2T}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}S_{XX}(\omega)\,d\omega = \int_{-\infty}^{\infty}S_X(f)\,df$ (7.9)
The power spectral density is defined as $S_{XX}(\omega) = \lim_{T\to\infty}\dfrac{E\left[|X_T(j\omega)|^2\right]}{2T}$.
Property 5: $S_{XX}^*(\omega) = S_{XX}(\omega)$, where $S_{XX}^*(\omega)$ is the complex conjugate of $S_{XX}(\omega)$. This means that $S_{XX}(\omega)$ cannot be a complex function; it is real-valued.
Property 6: If $\int_{-\infty}^{\infty}|R_{XX}(\tau)|\,d\tau < \infty$, then $S_{XX}(\omega)$ is a continuous function of $\omega$.
Property 11: If $S_{XX}(\omega)$ is the PSD of X(t), then the PSD of $\dfrac{d^nX(t)}{dt^n}$ is $\omega^{2n}S_{XX}(\omega)$.
The above property can be obtained by repeated application of the property
$R_{\dot X\dot X}(\tau) = -\frac{d^2}{d\tau^2}R_{XX}(\tau)$
and the differentiation property of the Fourier transform.
Property 12: If $S_{XX}(\omega)$ is the PSD of X(t), then the PSD of $X(t)e^{j\omega_0t}$ with non-random $\omega_0$ is $S_{XX}(\omega - \omega_0)$.
Property 13: If $S_{XX}(\omega)$ is the PSD of X(t), then the PSD of $X(t)\cos(\omega_0t + \theta)$ with $\theta \sim U(0, 2\pi)$ is
$\frac{1}{4}\left[S_{XX}(\omega + \omega_0) + S_{XX}(\omega - \omega_0)\right]$
x(t) and its Fourier transform X(jω):
$e^{-at},\ a > 0,\ t \ge 0 \;\leftrightarrow\; \dfrac{1}{a + j\omega}$
$e^{bt},\ b > 0,\ t < 0 \;\leftrightarrow\; \dfrac{1}{b - j\omega}$
$e^{-a|t|},\ a > 0 \;\leftrightarrow\; \dfrac{2a}{a^2 + \omega^2}$
$1 \;\leftrightarrow\; 2\pi\delta(\omega)$
$\delta(t) \;\leftrightarrow\; 1$
$te^{-at},\ a > 0,\ t \ge 0 \;\leftrightarrow\; \dfrac{1}{(a + j\omega)^2}$
$\begin{cases} 1 & -T/2 < t < T/2 \\ 0 & \text{otherwise} \end{cases} \;\leftrightarrow\; T\,\dfrac{\sin(\omega T/2)}{\omega T/2}$
$\begin{cases} 1 - |t|/T & |t| < T \\ 0 & \text{otherwise} \end{cases} \;\leftrightarrow\; T\left[\dfrac{\sin(\omega T/2)}{\omega T/2}\right]^2$
REVIEW QUESTIONS
7.1 Define power spectral density.
7.2 Define the Fourier transform pair.
7.3 Explain in detail the properties of power density spectrum.
7.4 Prove that power density spectrum is real and even function in w.
7.5 What is the condition for the existence of Fourier transform?
Solved Problems
7.1 Determine which of the following functions are valid power spectral densities and which are not.
(a) $S_{XX}(\omega) = \dfrac{\omega^2}{\omega^6 + 3\omega^2 + 3}$ (b) $S_{XX}(\omega) = e^{-(\omega - 1)^2}$
(c) $S_{XX}(\omega) = \dfrac{\omega^2}{\omega^4 + 1} - \delta(\omega)$ (d) $S_{XX}(\omega) = \dfrac{\omega^4}{1 + \omega^2 + j\omega^6}$
Solution
(a) $S_{XX}(-\omega) = \dfrac{(-\omega)^2}{(-\omega)^6 + 3(-\omega)^2 + 3} = \dfrac{\omega^2}{\omega^6 + 3\omega^2 + 3} = S_{XX}(\omega)$
Therefore, $S_{XX}(\omega)$ is an even, non-negative function of $\omega$; it is a valid PSD.
(b) $S_{XX}(-\omega) = e^{-(-\omega - 1)^2} = e^{-(\omega + 1)^2} \ne S_{XX}(\omega)$
Since $S_{XX}(\omega)$ is not an even function of $\omega$, it is not a valid PSD.
(c) At $\omega = 0$ the delta term makes $S_{XX}(\omega)$ negative. Since a PSD must be non-negative, it is not a valid PSD.
(d) $S_{XX}(\omega)$ is not a power spectrum because it is not real-valued.
7.2 Determine which of the following functions are power spectra and which are not.
(a) $S_{XX}(\omega) = \dfrac{\cos 2\omega}{1 + 2\omega^2 + \omega^4}$ (b) $S_{XX}(\omega) = \dfrac{\omega^2 + 3}{\omega^4 + 2\omega^2 - 3}$
Solution
(a) $S_{XX}(-\omega) = \dfrac{\cos 2(-\omega)}{1 + 2(-\omega)^2 + (-\omega)^4} = \dfrac{\cos 2\omega}{1 + 2\omega^2 + \omega^4} = S_{XX}(\omega)$
It satisfies $S_{XX}(\omega) = S_{XX}(-\omega)$, but $S_{XX}(\omega)$ is not always non-negative: for $\omega = \dfrac{(2k+1)\pi}{2}$, $k = 0, 1, \ldots$, the value of $\cos 2\omega$ is $-1$. Hence, $S_{XX}(\omega)$ is not a valid PSD.
(b) $S_{XX}(-\omega) = \dfrac{(-\omega)^2 + 3}{(-\omega)^4 + 2(-\omega)^2 - 3} = \dfrac{\omega^2 + 3}{\omega^4 + 2\omega^2 - 3} = S_{XX}(\omega)$
so $S_{XX}(\omega)$ is an even function of $\omega$. However,
$S_{XX}(\omega) = \dfrac{\omega^2 + 3}{\omega^4 + 2\omega^2 - 3} = \dfrac{\omega^2 + 3}{(\omega^2 - 1)(\omega^2 + 3)} = \dfrac{1}{\omega^2 - 1}$
which is negative for $|\omega| < 1$, so $S_{XX}(\omega)$ is not always non-negative. Hence, $S_{XX}(\omega)$ is not a valid PSD.
Practice Problem
7.1 Determine which of the following functions are valid PSDs and which are not.
(c) $S_{XX}(\omega) = e^{-|\omega|}$ (d) $S_{XX}(\omega) = \dfrac{\omega^2 + 6}{1 + \omega^2 - 2\omega^4}$
(e) $\dfrac{\cos\omega}{\omega}$ (f) $\dfrac{4\omega^2}{1 + 3\omega^2 + 4\omega^4}$
(Ans: (a) not valid (b) not valid (c) valid (d) not valid (e) not valid (f) valid)
To find the average power of the random process, we take the expected value, with T tending to infinity, in the above equation:
$P_{XX} = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}E\left[X^2(t)\right]dt$ (7.20)
$= \frac{1}{2\pi}\int_{-\infty}^{\infty}\lim_{T\to\infty}\frac{E\left[|X_T(j\omega)|^2\right]}{2T}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}S_{XX}(\omega)\,d\omega$
where
$S_{XX}(\omega) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\int_{-T}^{T}E[X(t_1)X(t_2)]e^{-j\omega(t_2 - t_1)}\,dt_2\,dt_1$ (7.22)
We know $E[X(t_1)X(t_2)] = R_{XX}(t_1, t_2)$
$\Rightarrow S_{XX}(\omega) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\int_{-T}^{T}R_{XX}(t_1, t_2)e^{-j\omega(t_2 - t_1)}\,dt_2\,dt_1$ (7.23)
Now the inverse Fourier transform of $S_{XX}(\omega)$ is
$F^{-1}[S_{XX}(\omega)] = \frac{1}{2\pi}\int_{-\infty}^{\infty}S_{XX}(\omega)e^{j\omega\tau}\,d\omega$
$= \frac{1}{2\pi}\int_{-\infty}^{\infty}\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\int_{-T}^{T}R_{XX}(t_1, t_2)e^{-j\omega(t_2 - t_1)}e^{j\omega\tau}\,dt_2\,dt_1\,d\omega$
$= \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\int_{-T}^{T}R_{XX}(t_1, t_2)\left[\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{-j\omega(t_2 - t_1 - \tau)}\,d\omega\right]dt_2\,dt_1$
$= \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\int_{-T}^{T}R_{XX}(t_1, t_2)\,\delta(t_2 - t_1 - \tau)\,dt_2\,dt_1$
$= \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}R_{XX}(t_1, t_1 + \tau)\,dt_1 = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}R_{XX}(t, t + \tau)\,dt$
For a WSS process this time average equals $R_{XX}(\tau)$, so
$S_{XX}(\omega) = \int_{-\infty}^{\infty}R_{XX}(\tau)e^{-j\omega\tau}\,d\tau$ (7.26)
and
$R_{XX}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty}S_{XX}(\omega)e^{j\omega\tau}\,d\omega$ (7.27)
That is, the power spectral density is the Fourier transform of the autocorrelation function. Equations (7.26) and (7.27) are known as the Wiener-Khintchine relations.
REVIEW QUESTIONS
7.6 Prove that PSD and autocorrelation function of a random process form a Fourier transform pair
7.7 Define autocorrelation function.
7.8 State and prove the Wiener-Khintchine relations.
Solved Problems
7.3 If RXX(t) = ae–b|t|, find the spectral density function, where a and b are constants.
Solution
Given: $R_{XX}(\tau) = ae^{-b|\tau|}$
The PSD is
$S_{XX}(\omega) = F[R_{XX}(\tau)] = \int_{-\infty}^{\infty}ae^{-b|\tau|}e^{-j\omega\tau}\,d\tau$
$= a\left[\int_{-\infty}^{0}e^{b\tau}e^{-j\omega\tau}\,d\tau + \int_0^{\infty}e^{-b\tau}e^{-j\omega\tau}\,d\tau\right] = a\left[\int_{-\infty}^{0}e^{(b - j\omega)\tau}\,d\tau + \int_0^{\infty}e^{-(b + j\omega)\tau}\,d\tau\right]$
$= \frac{a}{b - j\omega}\left[e^{(b - j\omega)\tau}\right]_{-\infty}^{0} - \frac{a}{b + j\omega}\left[e^{-(b + j\omega)\tau}\right]_0^{\infty}$
$= \frac{a}{b - j\omega}[1 - 0] - \frac{a}{b + j\omega}[0 - 1] = \frac{a}{b - j\omega} + \frac{a}{b + j\omega} = \frac{2ab}{b^2 + \omega^2}$
$S_{XX}(\omega) = \frac{2ab}{b^2 + \omega^2}$
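The transform pair just derived is easy to confirm numerically by evaluating the Wiener-Khintchine integral on a grid. A minimal sketch, with illustrative constants a and b of our own choosing:

import numpy as np

a, b = 3.0, 2.0
tau = np.linspace(-30, 30, 60001)
dtau = tau[1] - tau[0]
R = a * np.exp(-b * np.abs(tau))          # R_XX(tau) = a*exp(-b|tau|)

for w in (0.0, 1.0, 5.0):
    S_num = np.sum(R * np.cos(w * tau)) * dtau    # imaginary part cancels by symmetry
    print(w, S_num, 2 * a * b / (b**2 + w**2))    # numerical vs closed-form PSD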
Practice Problem
7.2 A stationary random process has autocorrelation function $R_{XX}(\tau) = 3e^{-|\tau|} + 5e^{-4|\tau|}$. Find the power spectral density of the process. $\left(\text{Ans: } \dfrac{46\omega^2 + 136}{(\omega^2 + 1)(\omega^2 + 16)}\right)$
Solved Problems
7.4 If X(t) is a wide-sense stationary process with PSD SXX(w) then find the power spectrum of Y(t) =
X(t) + X(t – t0).
Solution
Given: $Y(t) = X(t) + X(t - t_0)$
$R_{YY}(\tau) = E[Y(t)Y(t+\tau)] = E[\{X(t) + X(t - t_0)\}\{X(t+\tau) + X(t - t_0 + \tau)\}]$
$= E[X(t)X(t+\tau)] + E[X(t - t_0)X(t+\tau)] + E[X(t)X(t - t_0 + \tau)] + E[X(t - t_0)X(t - t_0 + \tau)]$
$= R_{XX}(\tau) + R_{XX}(\tau + t_0) + R_{XX}(\tau - t_0) + R_{XX}(\tau) = 2R_{XX}(\tau) + R_{XX}(\tau + t_0) + R_{XX}(\tau - t_0)$
Applying the Fourier transform on both sides and using the time-shift property,
$S_{YY}(\omega) = 2S_{XX}(\omega) + S_{XX}(\omega)e^{j\omega t_0} + S_{XX}(\omega)e^{-j\omega t_0} = 2S_{XX}(\omega)\left[1 + \cos\omega t_0\right]$
7.5 The power spectral density of a random process is $S_{XX}(\omega) = A$ for $-K \le \omega \le K$ and zero elsewhere. Find the autocorrelation function.
Solution
$R_{XX}(\tau) = \frac{1}{2\pi}\int_{-K}^{K}Ae^{j\omega\tau}\,d\omega = \frac{A}{2\pi}\left[\frac{e^{j\omega\tau}}{j\tau}\right]_{-K}^{K} = \frac{A}{2\pi}\left[\frac{e^{jK\tau} - e^{-jK\tau}}{j\tau}\right] = \frac{A}{\pi\tau}\left[\frac{e^{jK\tau} - e^{-jK\tau}}{2j}\right]$
$= \frac{A}{\pi\tau}\sin(K\tau)$
7.6 Find the autocorrelation function of the random process whose power spectral density is $S_{XX}(\omega) = \dfrac{\omega^2}{\omega^4 + 10\omega^2 + 9}$.
Solution Given:
$S_{XX}(\omega) = \frac{\omega^2}{\omega^4 + 10\omega^2 + 9} = \frac{\omega^2}{(\omega^2 + 1)(\omega^2 + 9)} = \frac{A}{\omega^2 + 1} + \frac{B}{\omega^2 + 9}$
$A = (\omega^2 + 1)\frac{\omega^2}{(\omega^2 + 1)(\omega^2 + 9)}\bigg|_{\omega^2 = -1} = \frac{-1}{8}, \qquad B = (\omega^2 + 9)\frac{\omega^2}{(\omega^2 + 1)(\omega^2 + 9)}\bigg|_{\omega^2 = -9} = \frac{-9}{-8} = \frac{9}{8}$
$S_{XX}(\omega) = -\frac{1}{8}\left(\frac{1}{\omega^2 + 1}\right) + \frac{9}{8}\left(\frac{1}{\omega^2 + 9}\right)$
Using $F^{-1}\left[\dfrac{2a}{a^2 + \omega^2}\right] = e^{-a|\tau|}$,
$R_{XX}(\tau) = -\frac{1}{8}\cdot\frac{1}{2}e^{-|\tau|} + \frac{9}{8}\cdot\frac{1}{6}e^{-3|\tau|} = -\frac{1}{16}e^{-|\tau|} + \frac{3}{16}e^{-3|\tau|}$
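The partial-fraction result can be sanity-checked by inverting $S_{XX}(\omega)$ numerically with a truncated Wiener-Khintchine integral. A minimal sketch; the grid span and test lags are illustrative, and the truncation of the $1/\omega^2$ tail limits agreement to roughly three decimal places:

import numpy as np

w = np.linspace(-200, 200, 400001)
dw = w[1] - w[0]
S = w**2 / (w**4 + 10 * w**2 + 9)

for tau in (0.0, 0.5, 1.0):
    R_num = np.sum(S * np.cos(w * tau)) * dw / (2 * np.pi)   # (1/2pi) Integral S e^{jw tau} dw
    R_exact = -np.exp(-abs(tau)) / 16 + 3 * np.exp(-3 * abs(tau)) / 16
    print(tau, R_num, R_exact)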
7.7 A random process has the power spectral density
$S_{XX}(\omega) = \begin{cases} 1 - \dfrac{\omega^2}{4}, & |\omega| \le 2 \\ 0, & \text{otherwise} \end{cases}$
Find its autocorrelation function.
Solution
$R_{XX}(\tau) = F^{-1}[S_{XX}(\omega)] = \frac{1}{2\pi}\int_{-\infty}^{\infty}S_{XX}(\omega)e^{j\omega\tau}\,d\omega = \frac{1}{2\pi}\int_{-2}^{2}\left(1 - \frac{\omega^2}{4}\right)e^{j\omega\tau}\,d\omega$
Integrating term by term (by parts for the $\omega^2$ term),
$= \frac{1}{2\pi}\left\{\left[\frac{e^{j\omega\tau}}{j\tau}\right]_{-2}^{2} - \frac{1}{4}\left[\frac{\omega^2e^{j\omega\tau}}{j\tau} + \frac{2\omega e^{j\omega\tau}}{\tau^2} - \frac{2e^{j\omega\tau}}{j\tau^3}\right]_{-2}^{2}\right\}$
$= \frac{1}{\pi}\left\{\frac{\sin 2\tau}{\tau} - \frac{1}{4}\left[\frac{4}{\tau}\sin 2\tau + \frac{4}{\tau^2}\cos 2\tau - \frac{2}{\tau^3}\sin 2\tau\right]\right\}$
$= \frac{1}{\pi}\left\{\frac{1}{2\tau^3}\sin 2\tau - \frac{1}{\tau^2}\cos 2\tau\right\}$
Practice Problem
7.3 For a random process with power spectrum
$S_{XX}(\omega) = 1 + \omega^2$ for $|\omega| < 1$, and 0 otherwise,
find the autocorrelation function of the process. $\left(\text{Ans: } \dfrac{2}{\pi}\left\{\dfrac{\sin\tau}{\tau} + \dfrac{\cos\tau}{\tau^2} - \dfrac{\sin\tau}{\tau^3}\right\}\right)$
Solved Problem
7.8 Consider the random process $X(t) = A\cos(\omega_0 t + \theta)$, where A and $\omega_0$ are real constants and $\theta$ is a random variable uniformly distributed on the interval $\left(0, \frac{\pi}{2}\right)$. Find the average power $P_{XX}$ in X(t).
Solution Given: $X(t) = A\cos(\omega_0 t + \theta)$ with
$f_\theta(\theta) = \begin{cases} \dfrac{2}{\pi}, & 0 \le \theta \le \dfrac{\pi}{2} \\ 0, & \text{otherwise} \end{cases}$
The (time-dependent) mean-square value is
$P_{XX}(t) = E[X^2(t)] = \int_{-\infty}^{\infty}X^2(t)f_\theta(\theta)\,d\theta = \frac{2}{\pi}\int_0^{\pi/2}A^2\cos^2(\omega_0 t + \theta)\,d\theta$
$= \frac{2A^2}{\pi}\int_0^{\pi/2}\frac{1 + \cos 2(\omega_0 t + \theta)}{2}\,d\theta = \frac{A^2}{\pi}\left[\theta\Big|_0^{\pi/2} + \frac{\sin 2(\omega_0 t + \theta)}{2}\bigg|_0^{\pi/2}\right]$
$= \frac{A^2}{\pi}\left[\frac{\pi}{2} + \frac{1}{2}\left\{\sin(2\omega_0 t + \pi) - \sin 2\omega_0 t\right\}\right] = \frac{A^2}{\pi}\left[\frac{\pi}{2} - \sin(2\omega_0 t)\right] = \frac{A^2}{2} - \frac{A^2}{\pi}\sin(2\omega_0 t)$
The average power is the time average of $P_{XX}(t)$:
$P_{XX} = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\left(\frac{A^2}{2} - \frac{A^2}{\pi}\sin 2\omega_0 t\right)dt = \frac{A^2}{2}$
$P_{XX} = \frac{A^2}{2}$
Practice Problems
7.4 $\left(\text{Ans: (a) } \dfrac{A^2}{4} \text{ (b) } \dfrac{A^2}{2}\right)$
7.5 Consider a random process $X(t) = \cos(\omega t + \theta)$, where $\omega$ is a real constant and $\theta$ is a uniform random variable in $\left(0, \frac{\pi}{2}\right)$. Find the average power in the process. (Ans. 1/2)
Solved Problem
7.9 The rectangular function $\Pi\left(\frac{\tau}{T}\right)$ is shown in Fig. 7.1. Is it a valid autocorrelation function?
Fig. 7.1
Solution Taking the Fourier transform,
$S_{XX}(\omega) = \int_{-T/2}^{T/2}e^{-j\omega\tau}\,d\tau = \frac{e^{j\omega T/2} - e^{-j\omega T/2}}{j\omega} = T\,\frac{\sin\left(\frac{\omega T}{2}\right)}{\frac{\omega T}{2}}$
From the above equation, we can observe that $S_{XX}(\omega)$ takes negative values for some $\omega$. Since a PSD is always non-negative, $\Pi\left(\frac{\tau}{T}\right)$ is not a valid autocorrelation function.
Practice Problems
Fig. 7.2
7.7 If $R_{XX}(\tau) = 9\left(\frac{1}{3}\right)^{|\tau|} + 16\left(\frac{1}{4}\right)^{|\tau|}$, find $S_{XX}(\omega)$. $\left(\text{Ans: } \dfrac{6}{\omega^2 + \frac{1}{9}} + \dfrac{8}{\omega^2 + \frac{1}{16}}\right)$
Solved Problem
7.10 Given a zero-mean stationary process X(t) with autocorrelation function $R_{XX}(\tau)$ and power spectrum $S_{XX}(\omega)$, find the autocorrelation and power spectrum of $Y(t) = X(t) + a$, where a is a non-zero, non-random real constant.
Solution
$R_{YY}(t, t+\tau) = E[Y(t)Y(t+\tau)] = E\{[X(t) + a][X(t+\tau) + a]\}$
$= E[X(t)X(t+\tau)] + aE[X(t+\tau)] + aE[X(t)] + a^2$
$= R_{XX}(\tau) + a^2 \qquad (\because E[X(t)] = E[X(t+\tau)] = 0)$
The same result holds if a is negative, since the cross terms vanish in either case. Taking the Fourier transform,
$S_{YY}(\omega) = S_{XX}(\omega) + 2\pi a^2\delta(\omega)$
Practice Problem
7.8 If X(t) is a stationary process with autocorrelation function $R_{XX}(\tau)$ and power spectrum $S_{XX}(\omega)$, find the power spectrum of $Y(t) = A + BX(t)$.
Hint: Proceed in the same way as the above problem. Since $E[X(t)] \ne 0$, we get $R_{YY}(\tau) = B^2R_{XX}(\tau) + 2ABm_X + A^2$ and $S_{YY}(\omega) = B^2S_{XX}(\omega) + (2ABm_X + A^2)\,2\pi\delta(\omega)$.
Solved Problems
7.11 Given a stationary random process X(t) and its autocorrelation function RXX(t) and power spectrum
SXX(w), find the autocorrelation and power spectrum of Y(t) = X(t) ejw0t where w0 is a non-random real
constant.
7.12 The given power spectrum is of a random process X(t) with a non-random dc term
SXX(w) = 6p d(w) + 1.5p d(w – 2.5p) + 1.5p d(w + 2.5p)+ 2p d(w – 10) + 2p d(w + 10)
(a) What frequencies are included in X(t)? How large are these frequencies in magnitude?
(b) Find the mean value and average power of X(t).
(c) Find the variance of X(t).
Solution
(a) Given: $S_{XX}(\omega) = 6\pi\delta(\omega) + 1.5\pi\left[\delta(\omega - 2.5\pi) + \delta(\omega + 2.5\pi)\right] + 2\pi\left[\delta(\omega - 10) + \delta(\omega + 10)\right]$
Taking the inverse Fourier transform on both sides, we get
$R_{XX}(\tau) = 3 + 1.5\cos(2.5\pi\tau) + 2\cos(10\tau)$
The frequencies present are $\omega_1 = 2.5\pi$ and $\omega_2 = 10$, i.e., $f_1 = 1.25$ Hz and $f_2 = \frac{10}{2\pi} = 1.592$ Hz. The component at $f_1$ has power $\frac{A^2}{2} = 1.5 \Rightarrow A = 1.732$, and the component at $f_2$ has power $\frac{A^2}{2} = 2 \Rightarrow A = 2$.
(b) The dc power is 3, so the mean value is $m_X = \pm\sqrt{3}$; the average power is $R_{XX}(0) = 3 + 1.5 + 2 = 6.5$ W.
(c) The variance is $\sigma_X^2 = R_{XX}(0) - m_X^2 = 6.5 - 3 = 3.5$.
7.13 A wide-sense stationary process X(t) has power spectrum S_XX(ω) = 9/(ω² + 4). (a) Find the average power of X(t). (b) Find the average power over the frequency band 0 ≤ ω ≤ 4.
Solution Given:
    S_XX(ω) = 9/(ω² + 4) = (9/4)[4/(ω² + 4)]
    R_XX(τ) = (9/4) F⁻¹[4/(ω² + 4)] = (9/4) e^{−2|τ|}
(a) Average power P_XX = R_XX(τ)|_{τ=0} = 9/4
(b) The power over the frequency band 0 ≤ ω ≤ 4 is
    P_XX[0, 4] = (1/π) ∫₀⁴ S_XX(ω) dω     [Refer Eq. (7.14b)]
               = (9/π) ∫₀⁴ dω/(ω² + 4) = (9/4π) ∫₀⁴ dω/[1 + (ω/2)²]
               = (9/2π) [tan⁻¹(ω/2)]₀⁴ = (9/2π) tan⁻¹ 2
7.14 Assume that the ergodic random process X(t) has an autocorrelation function
    R_XX(τ) = 18 + [2/(6 + τ²)][1 + 4 cos(12τ)]
What is the average power?
Solution The average power is
    P_XX = R_XX(0) = 18 + (2/6)[1 + 4] = 18 + 5/3 = 19.67 watts
7.15 The power density spectrum of a WSS process is
    S_XX(ω) = 4π δ(ω) + 3π δ(ω − 5π) + 3π δ(ω + 5π) + 2π δ(ω − 4) + 2π δ(ω + 4)
(a) What are the frequencies in X(t)?
(b) Find the mean, variance and average power of X(t).
Solution
(a) Taking the inverse Fourier transform of S_XX(ω) [using F⁻¹[2π δ(ω)] = 1 and F⁻¹[π δ(ω − ω₀) + π δ(ω + ω₀)] = cos ω₀τ], we get
    R_XX(τ) = 2 + 3 cos(5πτ) + 2 cos(4τ)
The frequencies are ω₁ = 5π ⟹ f₁ = 2.5 Hz, and ω₂ = 4 ⟹ f₂ = 4/2π = 0.637 Hz.
(b) The constant (dc) term of R_XX(τ) is 2; that is, the dc power is 2, so
    m_X² = 2 ⟹ m_X = E[X(t)] = √2
Average power E[X²(t)] = R_XX(0) = 2 + 3 + 2 = 7
    σ_X² = E[X²] − {E[X]}² = 7 − (√2)² = 5
Practice Problems
Determine (a) the average power, and (b) the autocorrelation function.
Solved Problems
Solution
    R_XX(τ) = 10 + 5 cos(2τ) + 10 e^{−2|τ|}
R_XX(τ) consists of periodic and non-periodic components:
    R_XX(τ) = R₁(τ) + R₂(τ)
where R₁(τ) = 5 cos 2τ is the periodic component and R₂(τ) = 10 + 10 e^{−2|τ|} is the non-periodic component.
The autocorrelation function of A cos(2t + θ) is (A²/2) cos 2τ. Comparing with R₁(τ), we get A²/2 = 5.
Since X(t) = A cos(2t + θ) has zero mean, the average power of the periodic component is R₁(0) = A²/2 = 5.
The average power of the non-periodic component is R₂(0) = 10 + 10 = 20.
The average power of X(t) is R_XX(0) = 10 + 5 + 10 = 25.
Definition 1: Consider jointly wide-sense stationary processes X(t) and Y(t) and their sum W(t) = X(t) + Y(t). The autocorrelation function of W(t) is
    R_WW(t, t + τ) = E{[X(t) + Y(t)][X(t + τ) + Y(t + τ)]}
                  = E[X(t)X(t + τ)] + E[Y(t)X(t + τ)] + E[X(t)Y(t + τ)] + E[Y(t)Y(t + τ)]
                  = R_XX(τ) + R_YX(τ) + R_XY(τ) + R_YY(τ)
Applying the Fourier transform on both sides, we get
    S_WW(ω) = S_XX(ω) + S_YX(ω) + S_XY(ω) + S_YY(ω)
The second and third terms in the above equation are known as cross-power density spectra, defined as
    S_XY(ω) = F[R_XY(τ)] = ∫_{−∞}^{∞} R_XY(τ) e^{−jωτ} dτ          (7.28)
    S_YX(ω) = F[R_YX(τ)] = ∫_{−∞}^{∞} R_YX(τ) e^{−jωτ} dτ          (7.29)
By taking the inverse Fourier transform, we can obtain the cross-correlation functions
    R_XY(τ) = (1/2π) ∫_{−∞}^{∞} S_XY(ω) e^{jωτ} dω                  (7.30)
    R_YX(τ) = (1/2π) ∫_{−∞}^{∞} S_YX(ω) e^{jωτ} dω                  (7.31)
Definition 2: Consider two real random processes X(t) and Y(t). Let x_T(t) and y_T(t) be truncated ensemble members of the random processes, defined as
    x_T(t) = x(t) for −T ≤ t ≤ T, and 0 otherwise
    y_T(t) = y(t) for −T ≤ t ≤ T, and 0 otherwise
The cross power can then be written as
    P_XY = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} (1/2T) E[X_T(jω) Y_T*(jω)] dω        (7.35)
where X_T(jω) and Y_T(jω) are the Fourier transforms of the truncated versions of x(t) and y(t) respectively.
Equation (7.35) can be written as
    P_XY = (1/2π) ∫_{−∞}^{∞} S_XY(ω) dω                                      (7.36)
where
    S_XY(ω) = lim_{T→∞} E[X_T(jω) Y_T*(jω)]/(2T)                             (7.37)
is known as the cross-power density spectrum.
From Eq. (7.37),
    S_XY(ω) = E[ lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t) Y(t + τ) e^{−jωτ} dt ]
            = F{A[E(X(t) Y(t + τ))]} = F{A[R_XY(t, t + τ)]}                   (7.38)
That is, the cross-power density spectrum can be defined as the Fourier transform of the time average of the cross-correlation function:
    S_XY(ω) = F[ lim_{T→∞} (1/2T) ∫_{−T}^{T} R_XY(t, t + τ) dt ]
            = ∫_{−∞}^{∞} { lim_{T→∞} (1/2T) ∫_{−T}^{T} R_XY(t, t + τ) dt } e^{−jωτ} dτ   (7.39)
For a jointly stationary process,
    S_XY(ω) = F[R_XY(τ)]  and  S_YX(ω) = F[R_YX(τ)]                          (7.40)
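The estimator implied by Eq. (7.37) is straightforward to mimic numerically. Below is a NumPy-only sketch (record length, ensemble size, and the delay relating Y to X are all assumed values, not from the text): it averages a periodogram-style cross term over an ensemble of truncated records.

```python
import numpy as np

# Sketch of Eq. (7.37): S_XY estimated by ensemble-averaging X_T(jw) Y_T*(jw)/(2T).
rng = np.random.default_rng(1)
n, reps = 2048, 200
acc = 0
for _ in range(reps):
    x = rng.standard_normal(n)        # white input, S_XX ~ 1
    y = np.roll(x, 3)                 # Y is a delayed copy of X (assumption)
    acc = acc + np.fft.rfft(x) * np.conj(np.fft.rfft(y)) / n
Sxy = acc / reps
print(np.abs(Sxy[1:50]).mean())       # ~1: |S_XY| = |S_XX| for a pure delay
```

For a pure delay the cross-spectrum keeps the input's magnitude and acquires only a linear phase, which the estimate above reflects.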
Properties of Cross-Power Density Spectrum
Property 1: Unlike the auto-power spectrum, the cross-power spectrum need not be real, non-negative or an even function of ω.
Property 2: S_YX(ω) = S_XY(−ω) = S_XY*(ω) for real processes. Since R_YX(τ) = R_XY(−τ),
    S_YX(ω) = ∫_{−∞}^{∞} R_XY(−τ) e^{−jωτ} dτ = S_XY(−ω)
            = [∫_{−∞}^{∞} R_XY(τ) e^{−jωτ} dτ]* = S_XY*(ω)
Property 3: The real parts of S_XY(ω) and S_YX(ω) are even functions of ω:
    S_XY(ω) = ∫_{−∞}^{∞} R_XY(τ) e^{−jωτ} dτ = ∫_{−∞}^{∞} R_XY(τ)[cos ωτ − j sin ωτ] dτ
    Re[S_XY(ω)] = ∫_{−∞}^{∞} R_XY(τ) cos ωτ dτ
    Re[S_XY(−ω)] = ∫_{−∞}^{∞} R_XY(τ) cos(−ωτ) dτ = ∫_{−∞}^{∞} R_XY(τ) cos ωτ dτ = Re[S_XY(ω)]
REVIEW QUESTIONS
7.9 Define cross power density spectrum.
7.10 Explain the properties of cross-power density spectrum.
7.11 Prove that SXY(w) = SYX*(w).
Solved Problems
7.17 (a) For random processes X(t) and Y(t), prove that S_XY(ω) = lim_{T→∞} E[X_T*(jω) Y_T(jω)]/(2T).
(b) If R_XY(τ) = 4 u(τ) e^{−aτ}, find S_XY(ω).
Solution
(a) Let x_T(t) = x(t) for −T ≤ t ≤ T and 0 otherwise, and define y_T(t) similarly. Proceeding as in Eqs. (7.35)–(7.37), the cross power of the truncated processes, averaged over the ensemble and over (−T, T), converges as T → ∞ to
    P_XY = (1/2π) ∫_{−∞}^{∞} S_XY(ω) dω
where
    S_XY(ω) = lim_{T→∞} E[X_T*(jω) Y_T(jω)]/(2T)
(b) Given R_XY(τ) = 4 u(τ) e^{−aτ}:
    S_XY(ω) = F[R_XY(τ)] = ∫_{−∞}^{∞} 4 u(τ) e^{−aτ} e^{−jωτ} dτ = 4 ∫₀^{∞} e^{−(a+jω)τ} dτ
            = [−4/(a + jω)] e^{−(a+jω)τ} |₀^{∞} = (−4/(a + jω))[0 − 1] = 4/(a + jω)
Hence S_XY(ω) = 4/(a + jω).
Practice Problems
7.11 If Y(t) is the moving average of X(t) over (t − T, t + T), express S_YY(ω) in terms of S_YX(ω). Hence, find the autocorrelation function of Y(t) in terms of the autocorrelation function of X(t).
7.12 The cross-power spectrum of real random processes X(t) and Y(t) is
    S_XY(ω) = 6/[(9 + ω²)(3 + jω)²]
Find the cross-correlation function.
    (Ans: R_XY(τ) = 6 e^{−3|τ|}{(9τ − 1) u(τ) + u(−τ)})
Solved Problem
7.18 If X(t) and Y(t) are real random processes, determine which of the following functions can be valid:
(a) S_XX(ω) = 5/(3 + 6ω³)   (b) S_XY(ω) = 6 + jω²   (c) S_XX(ω) = 2e^{−2|t|}/(1 + 2ω²)
Solution
(a) S_XX(−ω) = 5/(3 − 6ω³). Since S_XX(−ω) ≠ S_XX(ω), it is not a valid PSD.
(b) From Property 4 of the cross-power spectral density, the imaginary part must be an odd function of ω. In the given problem, the imaginary part of S_XY(ω) is an even function of ω. Hence, S_XY(ω) is not a valid cross-power spectrum.
(c) Since S_XX(ω) depends on t, it is not a valid PSD.
Practice Problem
7.13 If X(t) and Y(t) are real random processes, determine which of the following functions can be valid:
(a) S_XY(ω) = ω³ + jω²   (b) S_XX(ω) = 1/(1 − ω²)   (c) S_XY(ω) = 7δ(ω)
    (Ans: (a) no, the imaginary part must be odd; (b) no; (c) yes)
Solved Problems
7.19 Find the cross-spectral density function of the two random processes X(t) = A cos(ω₀t) + B sin(ω₀t) and Y(t) = −A sin(ω₀t) + B cos(ω₀t). The mean values are m_X = m_Y = 0 and the variances of these processes are σ².
Solution
    R_XY(t, t + τ) = E[X(t) Y(t + τ)]
                  = E{[A cos ω₀t + B sin ω₀t][−A sin(ω₀t + ω₀τ) + B cos(ω₀t + ω₀τ)]}
                  = −E[A²] cos ω₀t sin(ω₀t + ω₀τ) − E[AB] sin ω₀t sin(ω₀t + ω₀τ)
                    + E[AB] cos ω₀t cos(ω₀t + ω₀τ) + E[B²] sin ω₀t cos(ω₀t + ω₀τ)
Given E[A] = E[B] = 0, E[AB] = E[A]E[B] = 0 and E[A²] = E[B²] = σ²:
    R_XY(t, t + τ) = σ²[sin ω₀t cos(ω₀t + ω₀τ) − cos ω₀t sin(ω₀t + ω₀τ)] = −σ² sin ω₀τ
    S_XY(ω) = F[−σ² sin ω₀τ] = −jσ²π[δ(ω + ω₀) − δ(ω − ω₀)]
Practice Problem
7.14 Two jointly stationary random processes X(t) and Y(t) have the cross-correlation function R_XY(τ) = 2e^{−2τ}, τ ≥ 0. Find S_XY(ω) and S_YX(ω).
    (Ans: 2/(2 + jω), 2/(2 − jω))
Solved Problems
7.20 Two independent stationary random processes X(t) and Y(t) have power spectral densities S_XX(ω) = 16/(ω² + 16) and S_YY(ω) = ω²/(ω² + 16) respectively. Assume X(t) and Y(t) have zero mean. Find (a) the PSD of U(t) = X(t) + Y(t), and (b) S_XY(ω) and S_XU(ω).
Solution
(a) Since X(t) and Y(t) are independent and zero-mean, for U(t) = X(t) + Y(t),
    S_UU(ω) = S_XX(ω) + S_YY(ω) = 16/(ω² + 16) + ω²/(ω² + 16) = 1
(b) S_XY(ω) = F[R_XY(τ)] = F[E(X(t)Y(t + τ))] = F{E[X(t)] E[Y(t + τ)]} = 0, since the processes are independent with zero means. Also R_XU(τ) = R_XX(τ) + R_XY(τ) = R_XX(τ), so S_XU(ω) = S_XX(ω) = 16/(ω² + 16).
7.21 For a random process X(t) = A cos ω₀t + B sin ω₀t, find the power density spectrum if A and B are uncorrelated random variables with zero mean and the same variance.
7.22 Let Z(t) = X(t) cos(ω₀t + ψ), where X(t) is a zero-mean stationary Gaussian random process with E[X²(t)] = σ_X². (a) If ψ is a constant, say zero, find E[Z²(t)]; is Z(t) stationary? (b) If ψ is a random variable with a uniform pdf in the interval (−π, π), find R_ZZ(t, t + τ). Is Z(t) wide-sense stationary? If so, find the PSD of Z(t).
Solution
(a) Given ψ = 0: Z(t) = X(t) cos ω₀t
    E[Z²(t)] = E[X²(t) cos²(ω₀t)] = σ_X² cos² ω₀t
Since E[Z²(t)] depends on t, Z(t) is not stationary.
(b) ψ is a random variable with uniform pdf f_ψ(ψ) = 1/2π for −π ≤ ψ ≤ π, and 0 otherwise.
    R_ZZ(t, t + τ) = E[Z(t) Z(t + τ)] = R_XX(τ) E[cos(ω₀t + ψ) cos(ω₀t + ω₀τ + ψ)]
                  = R_XX(τ) (1/2π)(1/2) [ ∫_{−π}^{π} cos ω₀τ dψ + ∫_{−π}^{π} cos(2ω₀t + 2ψ + ω₀τ) dψ ]
The second integral is zero, so
    R_ZZ(t, t + τ) = R_XX(τ) (1/4π)(2π cos ω₀τ) = (1/2) R_XX(τ) cos ω₀τ
Since R_ZZ(t, t + τ) depends on τ alone, Z(t) is wide-sense stationary.
    S_ZZ(ω) = F[R_ZZ(τ)] = F[(1/2) R_XX(τ) cos ω₀τ] = (1/4)[S_XX(ω + ω₀) + S_XX(ω − ω₀)]
Fig. 7.3 (a) ACF of white noise; (b) PSD of white noise
The term white noise is derived from "white" light, which has a spectrum that is constant over all visible light frequencies. White noise is a popular mathematical model of many physical noises. For example, thermal noise generated by thermal agitation of electrons in any electric conductor can be modelled as white noise.
White noise has the following properties:
(i) The autocorrelation function is an impulse of intensity S₀.
(ii) The power density spectrum is constant at all frequencies.
(iii) It is physically unrealizable because it possesses infinite average power:
    P_NN = (1/2π) ∫_{−∞}^{∞} S_NN(ω) dω = ∞
(iv) It is completely unpredictable because its future values are uncorrelated with present and past values.
(v) It is easy to handle analytically because its value at one time is uncorrelated with its value at any other time.
For bandlimited white noise with S_NN(ω) = S₀ for |ω| < W and 0 otherwise, the autocorrelation function is
    R_NN(τ) = (S₀/πτ) sin(Wτ) = (WS₀/π) [sin(Wτ)/(Wτ)]                (7.51)
The bandlimited noise and its power spectral density are shown in Fig. 7.4.
Noise whose PSD is not flat is called coloured noise; for example,
    S_NN(ω) = 2a S₀/(a² + ω²)                                          (7.53)
which corresponds to R_NN(τ) = S₀ e^{−a|τ|}. The autocorrelation function and its PSD are shown in Fig. 7.5.
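A short simulation makes properties (i) and (ii) concrete. The sketch below (sample size, seed and the level S₀ = 2 are assumptions for illustration) generates discrete white noise and checks that the sample ACF is impulse-like and the averaged periodogram is flat at level S₀.

```python
import numpy as np

rng = np.random.default_rng(2)
n, s0 = 1 << 16, 2.0
x = rng.normal(0, np.sqrt(s0), n)     # variance S0 -> flat spectrum at level S0

# Sample ACF at small lags: ~S0 at lag 0, ~0 elsewhere (property (i))
print([round(np.dot(x[: n - k], x[k:]) / n, 2) for k in range(5)])

# Averaged periodogram: hovers around S0 at all frequencies (property (ii))
segs = x.reshape(64, -1)
psd = (np.abs(np.fft.rfft(segs, axis=1)) ** 2 / segs.shape[1]).mean(axis=0)
print(round(psd.mean(), 2), round(psd.std(), 2))   # mean ~2.0, small spread
```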
REVIEW QUESTIONS
7.12 Define white noise.
7.13 What is PSD of white noise?
7.14 What are the properties of white noise?
7.15 What is coloured noise?
7.16 Draw the PSD of white noise and coloured noise.
7.17 Draw the PSD of bandlimited white noise.
Solved Problem
7.23 For a wide-sense stationary random process Y(t) = X(t) + N(t), find the power spectral density, where X(t) is the actual signal and N(t) is a zero-mean noise process with variance σ_N², independent of X(t).
Solution Since X(t) and N(t) are independent and N(t) has zero mean, the cross terms vanish and
    R_YY(τ) = R_XX(τ) + R_NN(τ)  ⟹  S_YY(ω) = S_XX(ω) + S_NN(ω)
For a modulated process of the form Y(t) = A X(t) cos(ω₀t + θ), one obtains in the same way
    R_YY(τ) = (A²/2) R_XX(τ) cos(ω₀τ)
and, since F[R_XX(τ)] = S_XX(ω), we can apply Fourier-transform properties to get
    S_YY(ω) = (A²/2)[S_XX(ω − ω₀) + S_XX(ω + ω₀)]                      (7.58)
REVIEW QUESTION
7.19 Find RYY(t) and hence, SYY(w) in terms of SXX(w) for the product device shown in Fig. 7.7, if X(t) is
WSS.
Fig. 7.7
Solved Problems
7.25 Find the PSD of a random process X(t) if E[X(t)] = 1 and R_XX(τ) = 1 + e^{−a|τ|}.
Solution
    S_XX(ω) = ∫_{−∞}^{∞} {1 + e^{−a|τ|}} e^{−jωτ} dτ
            = ∫_{−∞}^{∞} e^{−jωτ} dτ + ∫_{−∞}^{∞} e^{−a|τ|} e^{−jωτ} dτ
Since F⁻¹[2π δ(ω)] = (1/2π) ∫ 2π δ(ω) e^{jωτ} dω = 1, the Fourier transform of 1 is 2π δ(ω). For the second term,
    ∫_{−∞}^{∞} e^{−a|τ|} e^{−jωτ} dτ = 1/(a − jω) + 1/(a + jω) = 2a/(a² + ω²)
    ⟹ S_XX(ω) = 2π δ(ω) + 2a/(a² + ω²)
Practice Problem
For the triangular power spectrum S_XX(ω) = Δ(ω/W) shown in Fig. 7.8, find the autocorrelation function.
Solved Problems
7.26 Find the power spectral density if the autocorrelation function of a WSS process is R_XX(τ) = k e^{−k|τ|}.
Solution
    S_XX(ω) = F[R_XX(τ)] = k F[e^{−k|τ|}]
            = k ∫_{−∞}^{∞} e^{−k|τ|} e^{−jωτ} dτ = k [ ∫_{−∞}^{0} e^{kτ} e^{−jωτ} dτ + ∫_{0}^{∞} e^{−kτ} e^{−jωτ} dτ ]
            = k [ 1/(k − jω) + 1/(k + jω) ]
            = 2k²/(k² + ω²) = 2/[1 + (ω/k)²]
7.27 Find the cross-power spectral density if (a) R_XY(τ) = (A²/2) sin ω₀τ, and (b) R_XY(τ) = (A²/2) cos ω₀τ.
Solution
(a) Using F[e^{±jω₀τ}] = 2π δ(ω ∓ ω₀),
    S_XY(ω) = F[(A²/2) sin ω₀τ] = (A²/2) F[(e^{jω₀τ} − e^{−jω₀τ})/2j]
            = (A²/4j)[2π δ(ω − ω₀) − 2π δ(ω + ω₀)]
            = (jA²/2)[π δ(ω + ω₀) − π δ(ω − ω₀)]
(b) S_XY(ω) = F[(A²/2) cos ω₀τ] = (A²/4)[2π δ(ω − ω₀) + 2π δ(ω + ω₀)]
            = (A²π/2)[δ(ω − ω₀) + δ(ω + ω₀)]
A process is said to be a bandpass process if its power is centred around some non-zero frequency ω = ω₀. For a bandpass process, the bandwidth is given by B = B_R − B_L, where B_R is the smallest value such that S_XX(ω) = 0 for all ω > B_R, and B_L is the largest value such that S_XX(ω) = 0 for all 0 < ω < B_L. The PSD of a bandpass process is shown in Fig. 7.10.
The rms bandwidth is defined by
    W²_rms = ∫_{−∞}^{∞} ω² S_XX(ω) dω / ∫_{−∞}^{∞} S_XX(ω) dω          (7.59)
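Equation (7.59) is a simple ratio of two integrals, so it can be checked numerically. The sketch below (grid resolution is an assumption) evaluates it for the spectrum used in Solved Problem 7.30 further on, where the closed-form answer is 36/5.

```python
import numpy as np

# Numerical check of Eq. (7.59): S_XX(w) = 4 - w^2/9 for |w| < 6, else 0.
w = np.linspace(-6, 6, 1_200_001)
dw = w[1] - w[0]
s = 4 - w**2 / 9
w2_rms = (w**2 * s).sum() * dw / (s.sum() * dw)
print(w2_rms, 36 / 5)   # both ~7.2
```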
Solved Problems
7.28 Find the rms bandwidth of the power spectrum S_XX(ω) = A/[1 + (ω/W)²] for |ω| ≤ kW and S_XX(ω) = 0 otherwise, where A, W and k are positive constants.
Solution
    W²_rms = ∫_{−kW}^{kW} ω² S_XX(ω) dω / ∫_{−kW}^{kW} S_XX(ω) dω = N/D
With the substitution ω = pW (so dω = W dp, ω² = p²W², and ω = ±kW ⟹ p = ±k):
    N = ∫_{−kW}^{kW} Aω²/[1 + (ω/W)²] dω = AW³ ∫_{−k}^{k} p²/(1 + p²) dp
      = AW³ ∫_{−k}^{k} [1 − 1/(1 + p²)] dp = AW³ [2k − 2 tan⁻¹ k]
    D = ∫_{−kW}^{kW} A/[1 + (ω/W)²] dω = AW ∫_{−k}^{k} dp/(1 + p²) = AW tan⁻¹ p |_{−k}^{k} = 2AW tan⁻¹ k
    W²_rms = N/D = AW³(2k − 2 tan⁻¹ k)/(2AW tan⁻¹ k) = W² (k − tan⁻¹ k)/tan⁻¹ k
7.29 Find the rms bandwidth for the PSD S_XX(ω) = 1/[1 + (ω/W)²]³.
Solution
    W²_rms = ∫_{−∞}^{∞} ω² S_XX(ω) dω / ∫_{−∞}^{∞} S_XX(ω) dω
Let ω/W = p, so dω = W dp. For the numerator,
    ∫_{−∞}^{∞} ω² S_XX(ω) dω = ∫_{−∞}^{∞} W²p²/(1 + p²)³ W dp = W³ ∫_{−∞}^{∞} p²/(1 + p²)³ dp
Let p = tan θ, dp = sec²θ dθ; as p goes from −∞ to ∞, θ goes from −π/2 to π/2:
    = W³ ∫_{−π/2}^{π/2} tan²θ sec²θ/(1 + tan²θ)³ dθ = W³ ∫_{−π/2}^{π/2} sin²θ cos²θ dθ
    = (W³/4) ∫_{−π/2}^{π/2} sin²2θ dθ = (W³/4) ∫_{−π/2}^{π/2} (1 − cos 4θ)/2 dθ = W³π/8
For the denominator,
    ∫_{−∞}^{∞} S_XX(ω) dω = W ∫_{−∞}^{∞} dp/(1 + p²)³ = W ∫_{−π/2}^{π/2} cos⁴θ dθ
Using cos⁴θ = [1 + cos 2θ]²/4 = 3/8 + (1/8) cos 4θ + (1/2) cos 2θ,
    ∫_{−∞}^{∞} S_XX(ω) dω = W (3/8) π = 3Wπ/8
    W²_rms = (W³π/8)/(3Wπ/8) = W²/3
7.30 Find the rms bandwidth and average power for the power spectrum
    S_XX(ω) = 4 − ω²/9 for |ω| < 6, and 0 otherwise
Solution
    ∫_{−∞}^{∞} ω² S_XX(ω) dω = ∫_{−6}^{6} ω²(4 − ω²/9) dω = (4/3)(2 × 6³) − (1/45)(2 × 6⁵)
                             = 2 × 6³ (4/3 − 36/45) = (8/15)(2 × 6³)
    ∫_{−∞}^{∞} S_XX(ω) dω = ∫_{−6}^{6} (4 − ω²/9) dω = 4(12) − (1/27)(2 × 6³) = 32
    W²_rms = (8/15)(2 × 6³)/32 = 36/5
Average power
    P_XX = (1/2π) ∫_{−∞}^{∞} S_XX(ω) dω = 32/2π = 16/π
7.31 Prove that the rms bandwidth can be obtained using the relation
    W²_rms = −(1/R_XX(0)) d²R_XX(τ)/dτ² |_{τ=0}                        (7.60)
Solution We know
    W²_rms = ∫_{−∞}^{∞} ω² S_XX(ω) dω / ∫_{−∞}^{∞} S_XX(ω) dω
Also,
    R_XX(τ) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) e^{jωτ} dω
Differentiating with respect to τ,
    dR_XX(τ)/dτ = (1/2π) ∫_{−∞}^{∞} (jω) S_XX(ω) e^{jωτ} dω
Similarly,
    d²R_XX(τ)/dτ² = (1/2π) ∫_{−∞}^{∞} (−ω²) S_XX(ω) e^{jωτ} dω
At τ = 0,
    d²R_XX(τ)/dτ² |_{τ=0} = −(1/2π) ∫_{−∞}^{∞} ω² S_XX(ω) dω
and R_XX(0) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) dω. Therefore
    −[d²R_XX(τ)/dτ² |_{τ=0}] / R_XX(0) = ∫_{−∞}^{∞} ω² S_XX(ω) dω / ∫_{−∞}^{∞} S_XX(ω) dω = W²_rms
Practice Problems
7.16 ... which can be treated as a bandpass process. Find the mean frequency and rms bandwidth.   (Ans: πW/2, ∞)
7.17 Find the rms bandwidth of the power spectrum
    S_XX(ω) = ω²/[1 + (ω/W)²]⁴        (Ans: W²)
An ensemble member of a discrete-time random process X(n) exists for all values of n; hence the DTFT of a sample function does not in general exist. Therefore, to obtain spectral information about a random process, we transform the autocorrelation function into the frequency domain. That is, the PSD of a discrete-time random process is
    S_XX(Ω) = Σ_{m=−∞}^{∞} R_XX(m) e^{−jΩm}                            (7.64)
Since e^{−jΩm} is periodic in Ω with period 2π, S_XX(Ω) is also periodic with period 2π:
    S_XX(Ω + 2π) = Σ_{m=−∞}^{∞} R_XX(m) e^{−j(Ω+2π)m} = Σ_{m=−∞}^{∞} R_XX(m) e^{−jΩm} = S_XX(Ω)   (7.65)
Therefore, it is sufficient to define S_XX(Ω) only in the range (−π, π). The autocorrelation function is given by
    R_XX(m) = (1/2π) ∫_{−π}^{π} S_XX(Ω) e^{jΩm} dΩ                      (7.66)
Since R_XX(m) is even, S_XX(Ω) is an even function: substituting p = −m,
    S_XX(−Ω) = Σ_{m=−∞}^{∞} R_XX(m) e^{jΩm} = Σ_{p=−∞}^{∞} R_XX(−p) e^{−jΩp} = S_XX(Ω)
Further properties:
(i) S_XX(Ω) is real and non-negative: S_XX(Ω) ≥ 0.
(ii) The average power of the process is
    E[X²(n)] = R_XX(0) = (1/2π) ∫_{−π}^{π} S_XX(Ω) dΩ                  (7.67)
REVIEW QUESTIONS
20. Define PSD of a discrete-time random process.
21. Prove that SXX(Ω) is periodic with period 2π.
Solved Problems
7.32 A random process X(n) has autocorrelation function R_XX(m) = a^{|m|}, with |a| < 1. Find the power spectral density.
Solution
    S_XX(Ω) = Σ_{m=−∞}^{∞} R_XX(m) e^{−jΩm}
            = Σ_{m=−∞}^{−1} a^{−m} e^{−jΩm} + Σ_{m=0}^{∞} a^{m} e^{−jΩm}
            = Σ_{m=1}^{∞} a^{m} e^{jΩm} + Σ_{m=0}^{∞} a^{m} e^{−jΩm}
            = a e^{jΩ}/(1 − a e^{jΩ}) + 1/(1 − a e^{−jΩ})
            = [a e^{jΩ} − a² + 1 − a e^{jΩ}] / [1 − a e^{jΩ} − a e^{−jΩ} + a²]
            = (1 − a²)/(1 − 2a cos Ω + a²)
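Because the closed form comes from summing a geometric series, it is easy to verify by truncating the DTFT sum numerically. The sketch below (a = 0.6 and the truncation point are assumed values) compares the truncated sum with the closed form.

```python
import numpy as np

# Numeric check of Solved Problem 7.32: S_XX = (1-a^2)/(1-2a cos(O) + a^2).
a, M = 0.6, 200
m = np.arange(-M, M + 1)
omega = np.linspace(-np.pi, np.pi, 7)
dtft = (a ** np.abs(m)[None, :] * np.exp(-1j * omega[:, None] * m)).sum(axis=1)
closed = (1 - a**2) / (1 - 2 * a * np.cos(omega) + a**2)
print(np.max(np.abs(dtft.real - closed)))   # ~0 (only truncation error remains)
```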
Solution
    S_XX(Ω) = Σ_{m=−∞}^{∞} R_XX(m) e^{−jΩm}
            = … + (4σ_x²/3²π²) e^{j3Ω} + (4σ_x²/π²) e^{jΩ} + σ_x² + (4σ_x²/π²) e^{−jΩ} + (4σ_x²/3²π²) e^{−j3Ω} + …
            = σ_x² + (8σ_x²/π²) [cos Ω + (1/3²) cos 3Ω + (1/5²) cos 5Ω + …]
7.34 Find the PSD of a discrete-time process whose autocorrelation function is R_XX(k) = 9a^{|k|} + 25b^{|k|}.
Solution Given:
    R_XX(k) = 9 a^{|k|} + 25 b^{|k|}
    S_XX(Ω) = F[R_XX(k)] = 9 F[a^{|k|}] + 25 F[b^{|k|}]
As in Problem 7.32,
    F[a^{|k|}] = Σ_{k=−∞}^{−1} a^{−k} e^{−jΩk} + Σ_{k=0}^{∞} a^{k} e^{−jΩk}
               = a e^{jΩ}/(1 − a e^{jΩ}) + 1/(1 − a e^{−jΩ}) = (1 − a²)/(1 − 2a cos Ω + a²)
    ⟹ S_XX(Ω) = 9(1 − a²)/(1 − 2a cos Ω + a²) + 25(1 − b²)/(1 − 2b cos Ω + b²)
7.35 A discrete-time random process Y(n) is defined by Y(n) = X(n) + a X(n − 1), where X(n) is a white-noise process with zero mean and variance σ_X². Find the mean, autocorrelation and PSD of the random process Y(n).
Solution Given Y(n) = X(n) + a X(n − 1), E[X(n)] = 0 and Var[X(n)] = σ_X². Since X(n) is white noise,
    R_XX(m) = σ_X² for m = 0, and 0 for m ≠ 0
The mean of Y(n) is
    E[Y(n)] = E[X(n)] + a E[X(n − 1)] = 0
The variance of Y(n) is
    σ_Y² = Var[X(n) + a X(n − 1)] = σ_X² + a² σ_X² = σ_X²(1 + a²)
The autocorrelation is
    R_YY(m) = E[Y(n) Y(n + m)] = E{[X(n) + a X(n − 1)][X(n + m) + a X(n − 1 + m)]}
            = R_XX(m) + a R_XX(m + 1) + a R_XX(m − 1) + a² R_XX(m)
For m = 0: R_YY(0) = σ_X² + a²σ_X²; for m = ±1: R_YY(±1) = a σ_X². Hence
    R_YY(m) = σ_X²(1 + a²) for m = 0;  a σ_X² for m = ±1;  0 otherwise
The PSD is given by
    S_YY(Ω) = Σ_m R_YY(m) e^{−jΩm} = a σ_X² e^{jΩ} + σ_X²(1 + a²) + a σ_X² e^{−jΩ}
            = σ_X²[(1 + a²) + 2a cos Ω]
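This first-order moving-average structure is simple to simulate. The sketch below (a = 0.5, unit noise variance, and the sample size are assumed values) estimates R_YY(m) from a long realization and compares it with the result above.

```python
import numpy as np

# Simulation check of Solved Problem 7.35: R_YY = (1+a^2, a, 0) at lags 0, 1, 2.
a, n = 0.5, 1_000_000
rng = np.random.default_rng(3)
x = rng.standard_normal(n)           # white noise, variance 1
y = x[1:] + a * x[:-1]               # Y(n) = X(n) + a X(n-1)
for lag, theory in [(0, 1 + a**2), (1, a), (2, 0.0)]:
    est = np.mean(y[: len(y) - lag] * y[lag:])
    print(lag, round(est, 3), theory)
```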
7.36 Consider a discrete-time random process X(n) = cos(Ω₀n + θ), where θ is a uniformly distributed random variable in the interval (0, 2π). Find S_XX(Ω).
Solution Given X(n) = cos(Ω₀n + θ):
    R_XX(n, n + m) = E[cos(Ω₀n + θ) cos(Ω₀n + Ω₀m + θ)]
                   = (1/2) E[cos(2Ω₀n + Ω₀m + 2θ) + cos(Ω₀m)]
Since θ is uniformly distributed in the interval (0, 2π),
    E[cos(2Ω₀n + Ω₀m + 2θ)] = ∫₀^{2π} cos(2Ω₀n + Ω₀m + 2θ) (1/2π) dθ = 0
    ⟹ R_XX(n, n + m) = (1/2) cos(Ω₀m)
    S_XX(Ω) = F[(1/2) cos Ω₀m] = (1/2)[π δ(Ω + Ω₀) + π δ(Ω − Ω₀)]
Practice Problems
7.18 If the ACF of white noise N(t) is R_NN(kT) = σ_N² for k = 0 and 0 for k ≠ 0, find the PSD.   (Ans: σ_N²)
7.19 Let Y(n) = X(n) − X(n − T). If X(n) is zero-mean WSS, find R_YY(m) and S_YY(Ω).
    (Ans: R_YY(m) = 2R_XX(m) − R_XX(m + T) − R_XX(m − T);  S_YY(Ω) = 2S_XX(Ω)(1 − cos ΩT))
Solved Problems
7.37 Consider a random process Z(t) = Σ_{i=1}^{N} z_i e^{jω_i t}, where the z_i, i = 1, …, N, are N complex zero-mean uncorrelated random variables with variance σ_{z_i}², and the ω_i are angular frequencies. Find the power spectral density.
Solution Since the z_i are zero-mean and uncorrelated, E[z_i* z_j] = σ_{z_i}² for i = j and 0 for i ≠ j. Then
    R_ZZ(τ) = E[Z*(t) Z(t + τ)] = E[ Σ_{i=1}^{N} z_i* e^{−jω_i t} Σ_{j=1}^{N} z_j e^{jω_j(t+τ)} ]
            = Σ_{i=1}^{N} Σ_{j=1}^{N} E[z_i* z_j] e^{j(ω_j−ω_i)t} e^{jω_j τ}
            = Σ_{i=1}^{N} σ_{z_i}² e^{jω_i τ}
    S_ZZ(ω) = F[ Σ_{i=1}^{N} σ_{z_i}² e^{jω_i τ} ] = 2π Σ_{i=1}^{N} σ_{z_i}² δ(ω − ω_i)
7.38 Let X(t) and Y(t) be statistically independent processes with power spectra
    S_XX(ω) = 2δ(ω) + 1/[1 + (ω/9)²]  and  S_YY(ω) = 6/[1 + (ω/3)²]
A complex process Z(t) = [X(t) + jY(t)] e^{jω₀t} is defined, where ω₀ is a constant much larger than 9. Find the ACF and PSD of Z(t).
Solution We know F[1] = 2π δ(ω), so F⁻¹[2δ(ω)] = 1/π. Also F[e^{−a|τ|}] = 2a/(a² + ω²), from which
    F⁻¹{1/[1 + (ω/9)²]} = F⁻¹[81/(81 + ω²)] = (9/2) e^{−9|τ|}  and  F⁻¹{6/[1 + (ω/3)²]} = 9 e^{−3|τ|}
    ⟹ R_XX(τ) = 1/π + (9/2) e^{−9|τ|}  and  R_YY(τ) = 9 e^{−3|τ|}
For the complex process Z(t) = [X(t) + jY(t)] e^{jω₀t},
    R_ZZ(τ) = E[Z*(t) Z(t + τ)] = [R_XX(τ) + R_YY(τ)] e^{jω₀τ}
            = [1/π + (9/2) e^{−9|τ|} + 9 e^{−3|τ|}] e^{jω₀τ}
By the frequency-shifting property,
    S_ZZ(ω) = 2δ(ω − ω₀) + 81/[81 + (ω − ω₀)²] + 54/[9 + (ω − ω₀)²]
7.39 Find the autocorrelation function if the power spectral density of a stationary random process is
    S_XX(ω) = A for −k < ω < k, and 0 otherwise
Solution
    R_XX(τ) = (1/2π) ∫_{−k}^{k} A e^{jωτ} dω = (A/2π) [e^{jωτ}/jτ]_{−k}^{k} = (A/2π)(e^{jkτ} − e^{−jkτ})/jτ
            = (Ak/π) (e^{jkτ} − e^{−jkτ})/(2jkτ) = (Ak/π) sin(kτ)/(kτ)
7.40 The cross-spectral density of two random processes X(t) and Y(t) is given by
    S_XY(ω) = 1/(−ω² + j4ω + 4)
Find the cross-correlation function.
Solution
    S_XY(ω) = 1/(−ω² + j4ω + 4) = 1/(2 + jω)²
    R_XY(τ) = F⁻¹[1/(2 + jω)²] = τ e^{−2τ} u(τ)
7.41 Find the average power of the random process with the power spectrum
    S_XX(ω) = (ω² + 14)/[(ω² + 25)(ω² + 36)]
Solution By partial fractions,
    R_XX(τ) = F⁻¹[S_XX(ω)] = F⁻¹[ −1/(ω² + 25) + 2/(ω² + 36) ]
            = −(1/10) F⁻¹[2(5)/(ω² + 25)] + (1/6) F⁻¹[2(6)/(ω² + 36)]
            = −(1/10) e^{−5|τ|} + (1/6) e^{−6|τ|}
The average power is
    R_XX(0) = −1/10 + 1/6 = 1/15 W
Similarly, for the power spectrum S_XX(ω) = ω²/[(ω² + 9)(ω² + 4)],
    R_XX(τ) = F⁻¹[ (−4/5)/(ω² + 4) + (9/5)/(ω² + 9) ]
Using F[e^{−a|τ|}] = 2a/(a² + ω²), we find
    R_XX(τ) = −(1/5) e^{−2|τ|} + (3/10) e^{−3|τ|}
The average power is
    R_XX(0) = −1/5 + 3/10 = 1/10 W
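Average-power results such as these are quick to sanity-check by integrating the PSD numerically. Below is a small sketch for Problem 7.41 (the finite integration range is an assumption; the tail contribution beyond it is negligible).

```python
import numpy as np

# Numeric check of Solved Problem 7.41: P_XX = (1/2pi) * Int S_XX(w) dw = 1/15.
w = np.linspace(-1000, 1000, 2_000_001)
dw = w[1] - w[0]
s = (w**2 + 14) / ((w**2 + 25) * (w**2 + 36))
print(s.sum() * dw / (2 * np.pi), 1 / 15)   # both ~0.067
```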
Practice Problems
7.20 Given the power spectral density of a continuous process as S_XX(ω) = (ω² + 9)/(ω⁴ + 5ω² + 4), find the mean square value of the process.
    (Ans: R_XX(τ) = (4/3) e^{−|τ|} − (5/12) e^{−2|τ|}; E[X²(t)] = R_XX(0) = 11/12)
7.21 Given the power spectral density S_XX(ω) = 1/(4 + ω²), find the average power of the process.   (Ans: 1/4)
Solved Problem
7.43 For the triangular power spectrum S_XX(ω) = (b/a)(a − |ω|) for |ω| ≤ a, and 0 otherwise, find the autocorrelation function.
Solution
    R_XX(τ) = (1/2π) ∫_{−a}^{a} (b/a)(a − |ω|) e^{jωτ} dω
            = (b/2aπ) [ ∫_{−a}^{0} (a + ω) e^{jωτ} dω + ∫_{0}^{a} (a − ω) e^{jωτ} dω ]
            = (b/aπτ²)(1 − cos aτ) = (2b/aπτ²) sin²(aτ/2)
Practice Problem
7.22 Find the autocorrelation function of a stationary random process whose power spectral density is
    S_XX(ω) = ω² for |ω| ≤ 1, and 0 for |ω| > 1
    (Ans: R_XX(τ) = (1/π)[sin τ/τ + 2 cos τ/τ² − 2 sin τ/τ³])
Solved Problems
7.44 Find the PSD of a stationary random process for which the autocorrelation is R_XX(τ) = e^{−a|τ|}.
Solution
    S_XX(ω) = ∫_{−∞}^{0} e^{aτ} e^{−jωτ} dτ + ∫_{0}^{∞} e^{−aτ} e^{−jωτ} dτ
            = 1/(a − jω) + 1/(a + jω) = 2a/(a² + ω²)
7.45 A random process has the power density spectrum S_XX(ω) = ω²/(1 + ω²). Find the average power in the process.
Solution Given:
    S_XX(ω) = ω²/(1 + ω²) = 1 − 1/(1 + ω²) = 1 − (1/2)[2/(1 + ω²)]
    R_XX(τ) = F⁻¹[S_XX(ω)] = δ(τ) − (1/2) e^{−|τ|}
Average power = R_XX(0) = 1 − 1/2 = 1/2
Practice Problem
7.23 A random process has the power density spectrum S_XX(ω) = 6ω²/(1 + ω⁴). Find the average power in the process.   (Ans: 3/√2)
Solved Problems
7.46 If the random process X(t) represents white normal noise with R_XX(t, s) = σ² δ(t − s), find the autocorrelation function of
    Y(t) = ∫₀^t X(t′) dt′                                            (7.72)
Solution
    R_YY(t₁, t₂) = E[Y(t₁) Y(t₂)] = ∫₀^{t₁} ∫₀^{t₂} E[X(t) X(s)] ds dt = ∫₀^{t₁} ∫₀^{t₂} R_XX(t, s) ds dt   (7.74)
                = ∫₀^{t₁} ∫₀^{t₂} σ² δ(t − s) dt ds = σ² ∫₀^{t₂} u(t₁ − s) ds
                = σ² ∫₀^{min(t₁,t₂)} ds = σ² min(t₁, t₂)
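The min(t₁, t₂) structure is the hallmark of a Wiener-type (integrated-noise) process, and a discrete simulation reproduces it directly. In the sketch below the step size, σ, and the two observation times are assumed values.

```python
import numpy as np

# Sketch for Solved Problem 7.46: R_YY(t1, t2) = sigma^2 * min(t1, t2).
rng = np.random.default_rng(4)
sigma, dt, n, trials = 1.5, 0.01, 1000, 5000
dW = rng.normal(0, sigma * np.sqrt(dt), (trials, n))   # white-noise increments
y = np.cumsum(dW, axis=1)                              # Y(t) ~ integral of X
i1, i2 = 400, 800                                      # t1 = 4 s, t2 = 8 s
print(np.mean(y[:, i1] * y[:, i2]), sigma**2 * 4.0)    # both ~9.0
```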
Solution (continued)
    (1/2π) · 5 · θ |_{−π/2}^{π/2} = 5/2 = 2.5
Given: S_XX(ω) = 25/(ω² + 25) = (5/2)[2(5)/(ω² + 25)]
    R_XX(τ) = F⁻¹[(5/2) · 2(5)/(ω² + 25)] = (5/2) e^{−5|τ|}
    R_XX(0) = 5/2
7.54 Probability Theory and Random Processes
7.48 The cross-power spectrum of a real random processes X(t) and Y(t) is given by
Ïa + jbw |w | <1
SXY(w) = Ì
Ó 0 otherwise
Find the cross-correlation function.
Solution We know
•
1
Ú SXY (w ) e
jwt
RXX(t) = dw
2p -•
1
1
Ú (a + jbw )e
jwt
= dw
2p -1
Ïa 1 Èw 1
1 jwt ˘ ¸Ô
1
1Ô jwt jwt
= Ì e + jb Í e + e ˙˝
2pÔÓ jt -1 ÍÎ jt -1 t2 -1 ˙
˚ Ô˛
1 ÔÏ a jt È 1 jt - jt ˘ jb jt ¸
- jt Ô
Ì (e - e ) + jb Í (e + e )˙ + 2 ÈÎe - e ˘˚ ˝
- jt
=
2p ÓÔ jt Î jt ˚ t ˛Ô
1 È 2a 2b 2b ˘
= sin t + cos t - 2 sin t ˙
2p ÍÎ t t t ˚
1
= [at sin t - b sin t + b t cos t ]
pt 2
7.49 A WSS noise process N(t) has ACF R_NN(τ) = P e^{−3|τ|}. Find the PSD and plot both the ACF and the PSD.
Solution
    S_NN(ω) = P [ ∫_{−∞}^{0} e^{(3−jω)τ} dτ + ∫_{0}^{∞} e^{−(3+jω)τ} dτ ]
            = P [ 1/(3 − jω) + 1/(3 + jω) ] = 6P/(ω² + 9)
Practice Problem
7.24 Find the power spectral density of a random process whose autocorrelation is R_XX(τ) = e^{−τ²/2a²}, a > 0.
    (Ans: a√(2π) e^{−ω²a²/2})
Solved Problems
7.50 Find the autocorrelation function of the process whose power spectral density is S_XX(ω) = π for |ω| ≤ 1 and 0 otherwise.
Solution We know
    R_XX(τ) = F⁻¹[S_XX(ω)] = (1/2π) ∫_{−1}^{1} π e^{jωτ} dω = (1/2jτ)(e^{jτ} − e^{−jτ}) = sin τ/τ
Hence R_XX(τ) = sin τ/τ.
7.51 A random process has autocorrelation function R_XX(τ) = 5e^{−4|τ|} − e^{−2|τ|} cos(3πτ) + cos(4πτ). Find its power spectrum.
Solution
    S_XX(ω) = 5F[e^{−4|τ|}] − F[e^{−2|τ|} cos 3πτ] + F[cos 4πτ]
Using F[e^{−a|τ|}] = 2a/(a² + ω²) together with the modulation and cosine transform pairs,
    S_XX(ω) = 40/(ω² + 16) − { 2/[4 + (ω − 3π)²] + 2/[4 + (ω + 3π)²] } + π[δ(ω − 4π) + δ(ω + 4π)]
7.52 Find the autocorrelation and power spectrum of the random process X(t) = Y sin ω₀t + Z cos ω₀t, where Y and Z are independent zero-mean random variables with the same variance σ².
Solution
    R_XX(τ) = E[X(t) X(t + τ)]
            = E[Y²] sin ω₀t sin ω₀(t + τ) + E[YZ]{cos ω₀t sin ω₀(t + τ) + sin ω₀t cos ω₀(t + τ)} + E[Z²] cos ω₀t cos ω₀(t + τ)
With E[YZ] = E[Y]E[Z] = 0 and E[Y²] = E[Z²] = σ²,
    R_XX(τ) = σ²[sin ω₀t sin ω₀(t + τ) + cos ω₀t cos ω₀(t + τ)] = σ² cos ω₀τ
    S_XX(ω) = F[R_XX(τ)] = σ²π[δ(ω − ω₀) + δ(ω + ω₀)]
7.53 Find the power spectrum S_XX(ω) and average power P_XX of the random process X(t) = A + B cos(ω₀t + θ), where A and B are independent random variables, ω₀ > 0 is a constant, and θ is a uniformly distributed random variable in the interval (0, 2π).
Solution
    R_XX(t, t + τ) = E{[A + B cos(ω₀t + θ)][A + B cos(ω₀t + ω₀τ + θ)]}
                  = E[A²] + E[A]E[B]E[cos(ω₀t + θ)] + E[A]E[B]E[cos(ω₀t + ω₀τ + θ)]
                    + E[B²] E[cos(ω₀t + θ) cos(ω₀t + ω₀τ + θ)]
Since θ is uniformly distributed in (0, 2π),
    E[cos(ω₀t + θ)] = 0 and E[cos(ω₀t + ω₀τ + θ)] = 0
    E[cos(ω₀t + θ) cos(ω₀t + ω₀τ + θ)] = (1/2) E[cos ω₀τ + cos(2ω₀t + ω₀τ + 2θ)] = (1/2) cos ω₀τ
Therefore,
    R_XX(t, t + τ) = E[A²] + (1/2) E[B²] cos ω₀τ = R_XX(τ)
    S_XX(ω) = 2π E[A²] δ(ω) + (π/2) E[B²][δ(ω − ω₀) + δ(ω + ω₀)]
    P_XX = R_XX(0) = E[A²] + (1/2) E[B²]
7.54 For a random process X(t), assume that R_XX(τ) = P e^{−τ²/2a²}, where P > 0 and a > 0 are constants. Find the power density spectrum of X(t).
Solution
    S_XX(ω) = ∫_{−∞}^{∞} P e^{−τ²/2a²} e^{−jωτ} dτ = P ∫_{−∞}^{∞} e^{−(τ²/2a² + jωτ)} dτ
Completing the square in the exponent,
    S_XX(ω) = P e^{−ω²a²/2} ∫_{−∞}^{∞} e^{−(τ/√2a + jωa/√2)²} dτ
Let τ/√2a + jωa/√2 = z, so dτ = √2 a dz; using ∫_{−∞}^{∞} e^{−z²} dz = √π,
    S_XX(ω) = P e^{−ω²a²/2} √2 a √π = √(2π) P a e^{−ω²a²/2}
7.55 Given a random process Y(t) = X(t) cos ω₀t, where ω₀ is constant and X(t) is a stationary random process with power spectrum S_XX(ω), find the power spectrum of Y(t).
7.56 Find the autocorrelation functions corresponding to the following power spectra:
(a) S_XX(ω) = (157 + 12ω²)/[(16 + ω²)(9 + ω²)]   (b) S_XX(ω) = 8/(9 + ω²)²
Solution
(a) By partial fractions,
    S_XX(ω) = 5/(16 + ω²) + 7/(9 + ω²)
Using F[e^{−a|τ|}] = 2a/(a² + ω²),
    S_XX(ω) = (5/8)[8/(ω² + 16)] + (7/6)[6/(9 + ω²)]
    R_XX(τ) = (5/8) e^{−4|τ|} + (7/6) e^{−3|τ|}
(b) S_XX(ω) = 8/(9 + ω²)² = (8/36)[6/(9 + ω²)]² = (2/9)[6/(9 + ω²)]²
Since F⁻¹[6/(9 + ω²)] = e^{−3|τ|} and multiplication of spectra corresponds to convolution of correlation functions,
    R_XX(τ) = (2/9) [e^{−3|τ|} * e^{−3|τ|}]
7.57 Consider two random processes X(t) = 3 cos(ωt + θ) and Y(t) = 2 cos(ωt + θ − π/2), where θ is a random variable uniformly distributed in (0, 2π). Prove that √(R_XX(0) R_YY(0)) ≥ |R_XY(τ)|.
Solution
    R_XX(τ) = E[X(t)X(t + τ)] = (9/2) E[cos ωτ] = (9/2) cos ωτ
    R_YY(τ) = E[4 cos(ωt + θ − π/2) cos(ωt + ωτ + θ − π/2)] = 2 cos ωτ
    R_XX(0) = 9/2 and R_YY(0) = 2
    R_XY(τ) = E[3 cos(ωt + θ) · 2 cos(ωt + ωτ + θ − π/2)]
            = 3 { E[cos(2ωt + ωτ + 2θ − π/2)] + E[cos(ωτ − π/2)] }/... = 3 sin ωτ
(the first expectation is zero). The maximum value of sin ωτ is 1, so |R_XY(τ)| ≤ 3. Since √(R_XX(0) R_YY(0)) = √(9/2 × 2) = 3, we indeed have √(R_XX(0) R_YY(0)) ≥ |R_XY(τ)|.
7.58 Given that a process X(t) has the autocorrelation function R_XX(τ) = A e^{−a|τ|} cos(ω₀τ), where A > 0, a > 0 and ω₀ are real constants, find the power spectrum of X(t).
Solution Writing
    R_XX(τ) = (A/2) e^{−a|τ|} [e^{jω₀τ} + e^{−jω₀τ}]
and using F[e^{−a|τ|}] = 2a/(a² + ω²) with the frequency-shifting property,
    F[e^{jω₀τ} e^{−a|τ|}] = 2a/[a² + (ω − ω₀)²]  and  F[e^{−jω₀τ} e^{−a|τ|}] = 2a/[a² + (ω + ω₀)²]
    ⟹ S_XX(ω) = (A/2) { 2a/[a² + (ω − ω₀)²] + 2a/[a² + (ω + ω₀)²] }
7.59 An ergodic random process is known to have an autocorrelation function of the form
    R_XX(τ) = 1 − |τ| for |τ| ≤ 1, and 0 for |τ| > 1
Show that the spectral density is given by
    S_XX(ω) = [sin(ω/2)/(ω/2)]²
Solution
    S_XX(ω) = ∫_{−1}^{1} (1 − |τ|) e^{−jωτ} dτ = ∫_{−1}^{0} (1 + τ) e^{−jωτ} dτ + ∫_{0}^{1} (1 − τ) e^{−jωτ} dτ
Integrating by parts and combining terms,
    S_XX(ω) = 2/ω² − (1/ω²)(e^{jω} + e^{−jω}) = (2/ω²)(1 − cos ω)
            = (2/ω²)(2 sin²(ω/2)) = 4 sin²(ω/2)/ω² = [sin(ω/2)/(ω/2)]²
Solved Problems
7.60 The autocorrelation function of a WSS random process is R_XX(τ) = a e^{−τ²/b²}. Find the power spectral density and average power.
Solution
    S_XX(ω) = ∫_{−∞}^{∞} a e^{−τ²/b²} e^{−jωτ} dτ = a e^{−ω²b²/4} ∫_{−∞}^{∞} e^{−(τ/b + jωb/2)²} dτ
Let τ/b + jωb/2 = z, so dτ = b dz; using ∫_{−∞}^{∞} e^{−z²} dz = √π,
    S_XX(ω) = √π a b e^{−ω²b²/4}
Average power:
    R_XX(0) = a e^{−τ²/b²}|_{τ=0} = a watts
Practice Problems
7.26 The autocorrelation function of a random process is R_XX(τ) = e^{−τ²/2σ²}. Find the PSD and average power of the signal.   (Ans: σ√(2π) e^{−σ²ω²/2})
7.27 Find the power density spectra corresponding to the autocorrelation functions (a) R_XX(τ) = e^{−aτ²} cos bτ, and (b) R_XX(τ) = 1 + e^{−2|τ|}.
    (Ans: (a) (1/2)√(π/a) [e^{−(ω−b)²/4a} + e^{−(ω+b)²/4a}]   (b) 2πδ(ω) + 4/(ω² + 4))
Solved Problems
7.61 The time-averaged cross-correlation of the processes is
    A[R_XY(t, t + τ)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} (AB/2)[sin ω₀τ + cos(2ω₀t + ω₀τ)] dt
                      = (AB/2) sin ω₀τ + lim_{T→∞} (AB/4T) [sin(2ω₀t + ω₀τ)/(2ω₀)]_{−T}^{T}
                      = (AB/2) sin ω₀τ
The cross-power spectrum is
    S_XY(ω) = F[(AB/2) sin ω₀τ] = (AB/2)(1/2j) F[e^{jω₀τ} − e^{−jω₀τ}]
Using F[e^{±jω₀τ}] = 2π δ(ω ∓ ω₀),
    S_XY(ω) = (AB/4j) 2π [δ(ω − ω₀) − δ(ω + ω₀)] = −(jπAB/2)[δ(ω − ω₀) − δ(ω + ω₀)]
7.62 Consider a random process Y(t) = X(t − T), where T is a delay. If X(t) is a WSS random process, find R_YY(τ), S_YY(ω), R_YX(τ) and S_YX(ω).
Solution Given Y(t) = X(t − T):
    R_YY(τ) = E[Y(t)Y(t + τ)] = E[X(t − T)X(t + τ − T)] = R_XX(τ)
    S_YY(ω) = F[R_YY(τ)] = F[R_XX(τ)] = S_XX(ω)
    R_YX(τ) = E[Y(t)X(t + τ)] = E[X(t − T)X(t + τ)] = R_XX(τ + T)
Using the time-shifting property of the Fourier transform,
    S_YX(ω) = e^{jωT} S_XX(ω)
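The linear-phase signature of a pure delay is easy to observe numerically. In the sketch below (sample rate, record length and delay are assumed values), the phase slope of the estimated cross-spectrum recovers the delay T; the sign of the slope depends on the Fourier-transform convention used.

```python
import numpy as np

# Sketch for Solved Problem 7.62: Y(t) = X(t - T) gives a linear cross-spectrum phase.
rng = np.random.default_rng(5)
fs, n, d = 1000.0, 2**14, 25                  # delay T = d/fs = 0.025 s
x = rng.standard_normal(n)
y = np.roll(x, d)                             # Y(k) = X(k - d), circular shift
Syx = np.fft.rfft(y) * np.conj(np.fft.rfft(x)) / n
w = 2 * np.pi * np.fft.rfftfreq(n, d=1 / fs)  # rad/s
slope = np.polyfit(w[1:200], np.unwrap(np.angle(Syx))[1:200], 1)[0]
print(abs(slope), d / fs)                     # |slope| ~ 0.025 s = T
```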
Practice Problem
7.28 Let Y(t) = X(t) + X(t − T). If X(t) is a WSS random process, find R_XY(τ) and S_XY(ω).
Solved Problems
7.63 Find the cross-spectral density function of the two random processes X(t) = A cos(ω₀t) + B sin(ω₀t) and Y(t) = −A sin(ω₀t) + B cos(ω₀t), where m_X = m_Y = 0 and the variances of these processes are σ².
Solution As in Problem 7.19, expanding R_XY(t, t + τ) = E[X(t)Y(t + τ)] and using E[AB] = E[A]E[B] = 0 and E[A²] = E[B²] = σ²,
    R_XY(t, t + τ) = −σ² sin ω₀τ
    S_XY(ω) = −jσ²π[δ(ω + ω₀) − δ(ω − ω₀)]
7.64 Find the average power of the random process X(t) with power density spectrum
    S_XX(ω) = 6ω²/(1 + ω²)³
Solution The average power is
    P_XX = (1/2π) ∫_{−∞}^{∞} S_XX(ω) dω = (1/2π) ∫_{−∞}^{∞} 6ω²/(1 + ω²)³ dω
Let ω = tan θ, so dω = sec²θ dθ; the limits of integration change from −π/2 to π/2. Therefore,
    P_XX = (1/2π) ∫_{−π/2}^{π/2} 6 tan²θ sec²θ/(1 + tan²θ)³ dθ = (6/2π) ∫_{−π/2}^{π/2} sin²θ cos²θ dθ
         = (3/π) ∫_{−π/2}^{π/2} (sin²2θ)/4 dθ = (3/4π) ∫_{−π/2}^{π/2} (1 − cos 4θ)/2 dθ = (3/4π)(π/2)·2/... = 3/8
Hence the average power P_XX = 3/8.
7.65 A WSS random process X(t) has PSD S_XX(ω) = ω²/(ω⁴ + 10ω² + 9). Find the ACF and mean square value of the process.
Solution Given:
    S_XX(ω) = ω²/(ω⁴ + 10ω² + 9) = ω²/[(ω² + 9)(ω² + 1)] = (−1/8)/(ω² + 1) + (9/8)/(ω² + 9)
Using F[e^{−a|τ|}] = 2a/(a² + ω²),
    R_XX(τ) = −(1/16) F⁻¹[2(1)/(ω² + 1)] + (3/16) F⁻¹[2(3)/(ω² + 9)] / ...
            = −(1/16) e^{−|τ|} + (3/16) e^{−3|τ|}
The mean square value is
    E[X²(t)] = R_XX(0) = −1/16 + 3/16 = 1/8 watts
7.66 For a random process X(t) = A cos ω₀t, where ω₀ is a constant and A is a random variable with mean 5 and variance 2, find the average power of X(t).
Solution Given E[A] = 5 and σ_A² = 2:
    R_XX(t, t + τ) = E[A²] cos ω₀t cos ω₀(t + τ), with E[A²] = σ_A² + {E[A]}² = 2 + 25 = 27
The average power is the time average of E[X²(t)] = 27 cos² ω₀t:
    P_XX = lim_{T→∞} (1/2T) ∫_{−T}^{T} (27/2)[1 + cos 2ω₀t] dt = 27/2 = 13.5 watts
(the time average of the cos 2ω₀t term is zero).
7.67 Find the average power of the WSS random process X(t) that has power spectral density
    S_XX(ω) = (ω² − 17)/[(ω² + 49)(ω² + 16)]
Solution By partial fractions, S_XX(ω) = A/(ω² + 49) + B/(ω² + 16), where
    A = (ω² − 17)/(ω² + 16) |_{ω²=−49} = −66/−33 = 2
    B = (ω² − 17)/(ω² + 49) |_{ω²=−16} = −33/33 = −1
    ⟹ S_XX(ω) = 2/(ω² + 49) − 1/(ω² + 16) = (2/14)[2(7)/(ω² + 49)] − (1/8)[2(4)/(ω² + 16)]
    R_XX(τ) = (2/14) e^{−7|τ|} − (1/8) e^{−4|τ|}
    P_XX = R_XX(0) = 2/14 − 1/8 = (8 − 7)/56 = 1/56 watts
7.68 Find the average power of the random process X(t) which has the power spectral density
    S_XX(ω) = 1 − |ω|/4π for |ω| ≤ 4π, and 0 otherwise
Solution The average power is
    P_XX = (1/2π) ∫_{−∞}^{∞} S_XX(ω) dω = (1/2π) ∫_{−4π}^{4π} (1 − |ω|/4π) dω
         = (1/2π) [ 8π − (2/4π) ∫_{0}^{4π} ω dω ] = (1/2π) [ 8π − (1/2π)(8π²) ]
         = (1/2π)(8π − 4π) = 2 watts
7.69 If R_XX(τ) = e^{−2λ|τ|} is the autocorrelation function of a random process X(t), obtain the spectral density.
Solution
    S_XX(ω) = 1/(2λ − jω) + 1/(2λ + jω) = 4λ/(4λ² + ω²)
7.70 Given the power spectral density S_XX(ω) = 1/(4 + ω²), find the average power of the process.
Solution
    P_XX = (1/2π) ∫_{−∞}^{∞} dω/(4 + ω²)
Using ∫ dx/(x² + a²) = (1/a) tan⁻¹(x/a),
    P_XX = (1/2π)(1/2) tan⁻¹(ω/2) |_{−∞}^{∞} = (1/4π)[tan⁻¹(∞) − tan⁻¹(−∞)]
         = (1/4π)[π/2 − (−π/2)] = 1/4
7.71 Find the power spectral density of the random process X(t) if E[X(t)] = 1 and R_XX(τ) = 1 + e^{−a|τ|}.
Solution
    S_XX(ω) = ∫_{−∞}^{∞} (1 + e^{−a|τ|}) e^{−jωτ} dτ = ∫_{−∞}^{∞} e^{−jωτ} dτ + ∫_{−∞}^{∞} e^{−a|τ|} e^{−jωτ} dτ
For the second term,
    ∫_{−∞}^{∞} e^{−a|τ|} e^{−jωτ} dτ = ∫_{−∞}^{0} e^{aτ} e^{−jωτ} dτ + ∫_{0}^{∞} e^{−aτ} e^{−jωτ} dτ
                                     = 1/(a − jω) + 1/(a + jω) = 2a/(a² + ω²)
For the first term, consider the inverse Fourier transform of δ(ω):
    F⁻¹[δ(ω)] = (1/2π) ∫_{−∞}^{∞} δ(ω) e^{jωτ} dω = 1/2π  ⟹  F[1/2π] = δ(ω), or F[1] = 2π δ(ω)
    ⟹ S_XX(ω) = 2π δ(ω) + 2a/(a² + ω²)
7.72 If X(t) is a WSS process with autocorrelation function R_XX(τ), and Y(t) = X(t + a) − X(t − a), prove that S_YY(ω) = 4 S_XX(ω) sin²(aω).
Solution Given Y(t) = X(t + a) − X(t − a):
    R_YY(τ) = E[Y(t)Y(t + τ)]
            = E[X(t + a)X(t + a + τ)] − E[X(t − a)X(t + a + τ)] − E[X(t + a)X(t − a + τ)] + E[X(t − a)X(t − a + τ)]
            = R_XX(τ) − R_XX(τ + 2a) − R_XX(τ − 2a) + R_XX(τ)
            = 2R_XX(τ) − R_XX(τ + 2a) − R_XX(τ − 2a)
Taking Fourier transforms of both sides and using the time-shifting property,
    S_YY(ω) = 2S_XX(ω) − e^{jω2a} S_XX(ω) − e^{−jω2a} S_XX(ω)
            = 2S_XX(ω)[1 − cos 2aω] = 4 S_XX(ω) sin²(aω)
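A discrete analogue of this result can be checked by filtering white noise. In the sketch below the shift a = 3 samples, the seed and the segment lengths are assumptions; since S_XX ≈ 1 for unit-variance white noise, the output PSD should follow 4 sin²(aω).

```python
import numpy as np

# Discrete sketch for Solved Problem 7.72: Y(n) = X(n+a) - X(n-a).
rng = np.random.default_rng(6)
a, n = 3, 2**16
x = rng.standard_normal(n)
y = np.roll(x, -a) - np.roll(x, a)
segs = y.reshape(64, -1)
psd = (np.abs(np.fft.rfft(segs, axis=1))**2 / segs.shape[1]).mean(axis=0)
w = 2 * np.pi * np.fft.rfftfreq(segs.shape[1])     # rad/sample
idx = [50, 100, 150, 200, 250]
print(np.round(psd[idx], 1))
print(np.round(4 * np.sin(a * w[idx])**2, 1))      # the two rows roughly agree
```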
7.73 Find the autocorrelation function of the process X(t) for which the PSD is given by S_XX(ω) = 1 + ω² for |ω| < 1 and S_XX(ω) = 0 for |ω| > 1.
Solution
    R_XX(τ) = (1/2π) ∫_{−1}^{1} (1 + ω²) e^{jωτ} dω
Integrating the ω² term by parts twice,
    R_XX(τ) = (1/2π) [ 2 sin τ/τ + 2 sin τ/τ + (2/τ²)(2 cos τ) − (2/τ³)(2 sin τ) ]
            = (2/π) [ sin τ/τ + cos τ/τ² − sin τ/τ³ ]
            = (2/πτ³) [ τ² sin τ + τ cos τ − sin τ ]
7.74 Find the mean square value of the process whose power spectral density is
    S_XX(ω) = 1/(ω⁴ + 10ω² + 9)
Solution Given:
    S_XX(ω) = 1/[(ω² + 9)(ω² + 1)] = (−1/8)/(ω² + 9) + (1/8)/(ω² + 1)
            = −(1/48)[6/(ω² + 9)] + (1/16)[2/(ω² + 1)]
    R_XX(τ) = −(1/48) e^{−3|τ|} + (1/16) e^{−|τ|}
The mean square value is
    R_XX(0) = −1/48 + 1/16 = (−1 + 3)/48 = 1/24
7.75 The autocorrelation function for a stationary process X(t) is given by R_XX(τ) = 9 + 2e^{−|τ|}. Find the mean of the random variable Y = ∫₀² X(t) dt and the variance of X(t).
Solution Given R_XX(τ) = 9 + 2e^{−|τ|}:
    m_X² = {E[X(t)]}² = lim_{τ→∞} R_XX(τ) = 9  ⟹  m_X = 3
    E[X²(t)] = R_XX(0) = 9 + 2 = 11
    Var[X(t)] = E[X²(t)] − m_X² = 11 − 9 = 2
    E[Y] = ∫₀² E[X(t)] dt = 3 × 2 = 6
Practice Problems
7.29 Given that the autocorrelation function for a stationary ergodic process with no periodic components is R_XX(τ) = 25 + 4/(1 + 6τ²), find the mean and variance of the process.   (Ans: m_X = 5, Var(X) = 4)
7.30 Find the mean and variance of a stationary process whose autocorrelation function is R_XX(τ) = 18 + 2/(6 + τ²).
    (m_X² = lim_{τ→∞} R_XX(τ) = 18 ⟹ m_X = 3√2;  E[X²(t)] = R_XX(0) = 55/3;  Var[X(t)] = 55/3 − 18 = 1/3)
7.31 A stationary random process has the autocorrelation function R_XX(τ) = (25τ² + 36)/(6.25τ² + 4). Find the mean and variance of the process.
    (m_X² = lim_{τ→∞} R_XX(τ) = 25/6.25 = 4;  E[X²(t)] = R_XX(0) = 9;  Var(X) = 9 − 4 = 5)
Solved Problem
7.76 Find the average power of a random process X(t) = A cos(ω₀t + θ) if
(a) ω₀ ~ U(50, 100) and nothing else in X(t) is random;
(b) A ~ N(5, 1) and nothing else in X(t) is random;
(c) θ ~ U(0, 2π) and nothing else in X(t) is random.
Solution
(a) Given ω₀ ~ U(50, 100), so f(ω₀) = 1/(100 − 50) = 1/50 for 50 ≤ ω₀ ≤ 100 and 0 otherwise.
    R_XX(t, t + τ) = E[X(t)X(t + τ)] = (A²/2) E[cos ω₀τ + cos(2ω₀t + ω₀τ + 2θ)]
                  = (A²/100) ∫₅₀¹⁰⁰ [cos ω₀τ + cos(2ω₀t + ω₀τ + 2θ)] dω₀
                  = (A²/100) [ (sin 100τ − sin 50τ)/τ + (sin[100(2t + τ) + 2θ] − sin[50(2t + τ) + 2θ])/(2t + τ) ]
    E[X²(t)] = lim_{τ→0} R_XX(t, t + τ) = (A²/100) [ 50 + (sin(200t + 2θ) − sin(100t + 2θ))/(2t) ]
Time-averaging, the second term vanishes:
    P_XX = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X²(t)] dt = (A²/100)(50) = A²/2
(b) R_XX(t, t + τ) = E[A²] cos(ω₀t + θ) cos(ω₀t + ω₀τ + θ). Given A ~ N(5, 1), E[A] = 5 and σ_A = 1, so
    E[A²] = σ_A² + {E[A]}² = 1 + 25 = 26
    E[X²(t)] = R_XX(t, t) = 26 cos²(ω₀t + θ)
    P_XX = lim_{T→∞} (1/2T) ∫_{−T}^{T} (26/2)[1 + cos 2(ω₀t + θ)] dt = 26/2 = 13
(c) With θ ~ U(0, 2π), f(θ) = 1/2π for 0 ≤ θ ≤ 2π and 0 otherwise:
    R_XX(t, t + τ) = (A²/2) E[cos ω₀τ + cos(2ω₀t + ω₀τ + 2θ)]
                  = (A²/4π) [ ∫₀^{2π} cos ω₀τ dθ + ∫₀^{2π} cos(2ω₀t + ω₀τ + 2θ) dθ ]
The second integral is zero, so
    R_XX(t, t + τ) = (A²/4π)(2π cos ω₀τ) = (A²/2) cos ω₀τ
The average power is
    P_XX = (A²/2) cos ω₀τ |_{τ=0} = A²/2
7.77 A random process X(t) has only a dc component A, where A is a random variable with zero mean and unit variance. Find the autocorrelation function and power spectrum.
Solution R_XX(τ) = E[A²] = 1 for all τ, so S_XX(ω) = 2π δ(ω).
7.78 Given a stationary random process X(t) with autocorrelation function R_XX(τ) and power spectrum S_XX(ω), find the autocorrelation and power spectrum of Y(t) = aX(t), where a is a constant.
Solution Given Y(t) = aX(t):
    R_YY(τ) = E[Y(t)Y(t + τ)] = E[a² X(t)X(t + τ)] = a² R_XX(τ)
If a is negative,
    R_YY(τ) = E[(−|a| x(t))(−|a| x(t + τ))] = a² R_XX(τ)
Hence, for any a,
    R_YY(τ) = |a|² R_XX(τ)
The power spectrum is
    S_YY(ω) = F[R_YY(τ)] = |a|² S_XX(ω)
7.79 If S_XX(ω) and S_YY(ω) are the power density spectra of two random processes X(t) and Y(t) respectively, under what condition is the power density spectrum of X(t) + Y(t) equal to S_XX(ω) + S_YY(ω)?
Solution Since S_WW(ω) = S_XX(ω) + S_XY(ω) + S_YX(ω) + S_YY(ω), equality holds when the cross-spectra vanish, e.g. when X(t) and Y(t) are orthogonal, or uncorrelated with at least one of them having zero mean.
7.80 Let X(t) and Y(t) be independent wide-sense stationary random processes, and let Z(t) = X(t)Y(t).
(a) Prove that Z(t) is wide-sense stationary. (b) Find R_ZZ(τ) and S_ZZ(ω).
Solution Given that X(t) and Y(t) are independent wide-sense stationary processes, R_XX(t, t + τ) = R_XX(τ) and R_YY(t, t + τ) = R_YY(τ). The mean of Z(t) is
    E[Z(t)] = E[X(t)Y(t)] = E[X(t)] E[Y(t)] = m_X m_Y   (a constant)
The autocorrelation function is
    R_ZZ(τ) = E[Z(t)Z(t + τ)] = E[X(t)X(t + τ)] E[Y(t)Y(t + τ)] = R_XX(τ) R_YY(τ)
Since the mean is constant and R_ZZ depends only on τ, Z(t) is wide-sense stationary. Using the convolution property of Fourier transforms,
    S_ZZ(ω) = (1/2π) S_XX(ω) * S_YY(ω)
7.81 If X(t) and Y(t) are uncorrelated and have constant means m_X and m_Y, then
    S_XY(ω) = S_YX(ω) = 2π m_X m_Y δ(ω)
7.82 The cross-correlation function of the processes X(t) and Y(t) is given by
    R_XY(t, t + τ) = (AB/2)[sin ω₀τ + cos ω₀(2t + τ)]
where A and B are constants. Find S_XY(ω).
Solution
    S_XY(ω) = F{A[R_XY(t, t + τ)]} = F{ lim_{T→∞} (1/2T) ∫_{−T}^{T} (AB/2)[sin ω₀τ + cos ω₀(2t + τ)] dt }
The time average of the first term is (AB/2) sin ω₀τ; for the second term,
    lim_{T→∞} (1/2T) (AB/2) [sin ω₀(2t + τ)/(2ω₀)]_{−T}^{T} = 0
Therefore,
    S_XY(ω) = F[(AB/2) sin ω₀τ] = (AB/2){−jπ[δ(ω − ω₀) − δ(ω + ω₀)]}
            = −(jπAB/2)[δ(ω − ω₀) − δ(ω + ω₀)]
7.83 The power spectral density of a zero-mean process X(t) is given by
    S_XX(ω) = 1 for |ω| ≤ ω₀, and 0 elsewhere
Find R_XX(τ) and prove that X(t) and X(t + π/ω₀) are uncorrelated.
Solution
    R_XX(τ) = F⁻¹[S_XX(ω)] = (1/2π) ∫_{−ω₀}^{ω₀} e^{jωτ} dω = (1/πτ)(e^{jω₀τ} − e^{−jω₀τ})/2j = sin ω₀τ/πτ
Since X(t) is a zero-mean process,
    Cov[X(t), X(t + π/ω₀)] = E[X(t) X(t + π/ω₀)] = R_XX(π/ω₀) = sin(ω₀ π/ω₀)/(π · π/ω₀) = 0
Therefore, X(t) and X(t + π/ω₀) are uncorrelated.
Carrying out the integration by parts for a triangular autocorrelation function of height A and half-width T/2, i.e. R_XX(τ) = A(1 − 2|τ|/T) for |τ| ≤ T/2, the transform reduces to
    S_XX(ω) = (8A/Tω²) sin²(ωT/4) = (AT/2) [sin(ωT/4)/(ωT/4)]² = (AT/2) sinc²(ωT/4)
For R_XX(τ) = e^{−a|τ|} cos(ω₀τ), we have F[e^{−a|τ|}] = 2a/(a² + ω²), so
    F[e^{−a|τ|} cos ω₀τ] = (1/2){ 2a/[a² + (ω + ω₀)²] + 2a/[a² + (ω − ω₀)²] }
    ⟹ S_XX(ω) = a/[a² + (ω + ω₀)²] + a/[a² + (ω − ω₀)²]
REVIEW QUESTIONS
22. State any two properties of the cross-correlation function.
23. Define bandlimited white noise.
24. Find the mean of the stationary process X(t) whose autocorrelation function is given by
    R_XX(τ) = 16 + 9/(1 + 6τ²)
25. Find the power spectral density function of the stationary process whose autocorrelation function is given by e^{−|τ|}.
26. What is the autocorrelation function of white noise?
27. If X(t) is a normal process with m_X(t) = 10 and C_XX(t₁, t₂) = 16 e^{−|t₁−t₂|}, find the variance of X(10) − X(6).
28. The autocorrelation function of a stationary random process is R_XX(τ) = 25 + 4/(1 + 6τ²). Find the mean and variance of the process.
29. Explain the power spectral density function. State its important properties.
30. Given the power spectral density of a stationary process as S_XX(ω) = 1/(4 + ω²), find the average power of the process.
31. State and prove the Wiener–Khintchine relations.
32. State and prove any three properties of the cross-power density spectrum.
33. Derive the relation between the cross-power spectrum and the cross-correlation function.
EXERCISES
Problems
1. Find the autocorrelation function of the periodic time function X(t) = A sin ωt.
2. The autocorrelation function of the random binary transmission X(t) is given by R_XX(τ) = 1 − |τ|/T for |τ| < T and R_XX(τ) = 0 for |τ| > T. Find the power spectrum of the process X(t).
3. X(t) and Y(t) are zero-mean, stochastically independent random processes having autocorrelation functions R_XX(τ) = e^{−|τ|} and R_YY(τ) = cos 2πτ respectively. Find (a) the autocorrelation functions of W(t) = X(t) + Y(t) and Z(t) = X(t) − Y(t); (b) the cross-correlation function of W(t) and Z(t).
4. Find the ACF of the process X(t) for which the PSD is given by S_XX(ω) = 1 + ω² for |ω| < 1 and S_XX(ω) = 0 for |ω| > 1.
5. Find the variance of the stationary process X(t) whose autocorrelation function is given by R_XX(τ) = 2 + 4e^{−2|τ|}.
6. If X(t) and Y(t) are two random processes with autocorrelation functions R_XX(τ) and R_YY(τ) respectively, prove that |R_XY(τ)| ≤ √(R_XX(0) R_YY(0)).
7. Find the power spectral density of the random process whose autocorrelation function is R_XX(τ) = 1 − |τ| for |τ| ≤ 1, and 0 elsewhere.
8. The ACF of a random process is given by
    R_XX(τ) = λ²                      for |τ| > ε
            = λ² + (λ/ε)(1 − |τ|/ε)   for |τ| ≤ ε
Find the power spectral density of the process.
9. Given the power spectral density of a continuous process as S_XX(ω) = (ω² + 9)/(ω⁴ + 5ω² + 4), find the mean square value of the process.
10. The cross-power spectrum of real random processes X(t) and Y(t) is given by S_XY(ω) = a + jbω for |ω| < 1, and 0 elsewhere. Find the cross-correlation function.
11. If the PSD of X(t) is S_XX(ω), find the PSD of dX(t)/dt.
12. Prove that S_XX(ω) = S_XX(−ω).
13. If R_YY(τ) = a e^{−b|τ|}, where a and b are constants, find the spectral density function.
14. Given R_XX(τ) = (A₀²/2) sin ω₀τ, find S_XX(ω).
15. Let X(t) be a stationary continuous random process that is differentiable, and denote its time derivative by Ẋ(t). Show that E[Ẋ(t)] = 0, and find F[R_ẊẊ(τ)] in terms of S_XX(ω).
16. Find the ACFs of the following PSDs:
    S_XX(ω) = (157 + 2ω²)/[(16 + ω²)(9 + ω²)]
    S_XX(ω) = 8/(9 + ω²)²
17. Find the mean square value of the process whose power spectral density is S_XX(ω) = 1/(ω⁴ + 10ω² + 9).   (Ans: 1/24)
18. Check whether S_XX(ω) = (2ω² + 6)/(8ω⁴ + 3ω² + 4) can represent a valid power spectral density function.   (Ans: a valid PSD)
19. Find the mean square value of the process whose power density spectrum is S_XX(ω) = (ω² + 2)/(ω⁴ + 5ω² − 36).   (Ans: 1/2)
20. Find the power spectral density for the stationary process X(t) with autocorrelation function R_XX(τ) = σ² e^{−a|τ|}.
21. The power spectral density of a stationary random process is S_XX(ω) = (ω² + 1)/(ω⁴ + 4ω² + 4). Find the autocorrelation function and average power of the process.
    (Ans: R_XX(τ) = (1/2) e^{−√2|τ|} − τ e^{−√2|τ|}; R_XX(0) = 1/2)
22. If the power spectral density of a WSS process is given by
    S_XX(ω) = (b/a)(a − |ω|) for |ω| ≤ a, and 0 for |ω| > a
find the autocorrelation of the process.
23. A stationary random process X(t) has an ACF given by R_XX(τ) = 2e^{−2|τ|} + 3e^{−3|τ|}. Find the power spectral density of the process.
24. A zero-mean wide-sense stationary random process X(t), −∞ < t < ∞, has the power spectral density
    S_XX(ω) = 2/(1 + ω²), −∞ < ω < ∞
The random process Y(t) is defined by Y(t) = X(t) + X(t − 2). Find the mean and variance of Y(t).
25. Two jointly stationary random processes X(t) and Y(t) have the cross-correlation function R_XY(τ) = 16e^{−4τ}, τ ≥ 0. Find S_XY(ω) and S_YX(ω).
26. A random process has a power spectral density function S_XX(ω) = 1/[1 + (ω/B)²]³. Find the bandwidth.
29. The random process X(t) = A₀ sin(ω₀t + θ), where A₀ and ω₀ are constants and θ is a random variable uniformly distributed on the interval (0, π). (a) Is X(t) wide-sense stationary? (b) Find the power density spectrum.
30. Find the rms bandwidth of the power spectrum
    S_XX(ω) = A cos(πω/2W) for |ω| ≤ W, and 0 for |ω| > W
where W > 0 and A > 0 are constants.
31. The autocorrelation function of a random process X(t) is R_XX(τ) = 1 + 4 exp(−2τ²).
(a) Find the power spectrum of X(t). (b) What is the average power in X(t)?
32. State whether or not each of the following functions can be a valid power-density spectrum:
(a) ω e^{−ω}/(1 + jω)   (b) ω⁴/(8 + ω²)⁴   (c) cos²(ω) e^{−ω²}   (d) e^{−2 sin(2ω)}
Multiple-Choice Questions
1. For a signal x(t), the Fourier transform exists if it satisfies the condition
(a) ∫|x(t)| dt < ∞   (b) ∫x(t) dt < ∞   (c) ∫|x(t)|² dt < ∞   (d) ∫x²(t) dt < ∞
2. The power density spectrum satisfies the following condition:
(a) S_XX(ω) = −S_XX(−ω)   (b) S_XX(ω) = S_XX(−ω)   (c) S_XX(ω) = ∞ at ω = 0   (d) S_XX(ω) = S_XX(ω²)
3. S_ẊẊ(ω) =
(a) ω S_XX(ω)   (b) −ω² S_XX(ω)   (c) ω² S_XX(ω)   (d) ω² S_ẊẊ(ω)
4. If S_XX(ω) is the power density spectrum of X(t), then the power density spectrum of aX(t) is
(a) a S_XX(ω)   (b) a² S_XX(ω)   (c) −a S_XX(ω)   (d) |a|² S_XX(ω)
5. Which of the following power density spectra is/are valid?
(a) ω²/(ω² + 16)   (b) 1/(3ω⁴ + 9ω²)   (c) ω e^{−8ω}/(1 + j2ω)   (d) (ω + 2)/(ω⁴ + ω²)
6. The mean square value of the process whose power density spectrum is 1/(4 + ω²) is
(a) 1   (b) 1/2   (c) 1/4   (d) 2
7. The rms bandwidth of the process whose ACF is 2e^{−b|τ|} is
(a) 1 Hz   (b) 3/4 Hz   (c) 5/4 Hz   (d) 2 Hz
8. The PSD of a random process whose R_XX(τ) = e^{−b|τ|} is
(a) 1/(b² + ω²)   (b) 2b/(b² + ω²)   (c) b/[2(b² + ω²)]   (d) b/(b² + ω²)
12. The average power of the random process with power spectrum S_XX(ω) = (3ω² + 4)/(2ω⁴ + 6ω² + 4) is
(a) 1/4 + 1/√2   (b) 1/2 + 1/√2   (c) 1/2 + 1/(2√2)   (d) 1/4 + 1/(2√2)
13. The power spectral density of a stationary process whose autocorrelation function is e^{−|τ|} is
(a) 1/(1 + ω²)   (b) 2/(1 + ω²)   (c) 2/(1 + ω²)²   (d) 1/(1 + ω²)²
14. The power spectral density of the random process whose ACF is R_XX(τ) = 1 − |τ| for |τ| ≤ 1 and 0 elsewhere is
(a) (2/ω²)(1 − sin ω)   (b) (2/ω²)(1 − cos ω)   (c) (2/ω²) sin²(ω/2)   (d) (2/ω²) cos²(ω/2)
15. The cross-power density spectrum of X(t) and Y(t) is S_XY(ω) = 8/(a + jω)³. The cross-correlation function is
(c) R_XX(τ) = √(R_XX(0) R_YY(0))   (d) none of the above
18. If X(t) and Y(t) are uncorrelated and have constant means m_X and m_Y, then S_XY(ω) =
(a) m_X m_Y   (b) 2π m_X m_Y   (c) 2π m_X m_Y δ(ω)   (d) (m_X + m_Y) δ(ω)
19. The rms bandwidth W²_rms in terms of the power spectral density is
(a) ∫S_XX(ω) dω / ∫ω² S_XX(ω) dω   (b) ∫S_XX(ω) dω / ∫ω S_XX(ω) dω   (c) ∫ω² S_XX(ω) dω / ∫S_XX(ω) dω   (d) ∫ω S_XX(ω) dω / ∫S_XX(ω) dω
22. The power spectral density of a WSS process is given by S_XX(ω) = 1 for |ω| < ω₀, and 0 otherwise. The autocorrelation function is
(a) sin ω₀τ/πτ   (b) sin² ω₀τ/πτ   (c) sin² ω₀τ/π²τ²   (d) sin²(ω₀τ/2)/(πτ/2)
23. The PSD of a wide-sense stationary process is always
(a) finite   (b) zero   (c) negative   (d) non-negative
24. The average power of a waveform X(t) = A cos(ω₀t + θ) is
(a) A/2   (b) A²/4   (c) A²/2   (d) A²
25. The autocorrelation function of white noise is given by
(a) R_NN(τ) = S₀   (b) R_NN(τ) = S₀δ(τ)   (c) R_NN(τ) = S₀u(τ)   (d) R_NN(τ) = S₀²δ(τ)
INTRODUCTION 8.1
In the previous chapters, we studied about random signals and random processes. We know that a random
signal can be modelled as a sample function of a random process. Such random signals are described
mathematically using time-domain statistical measures like mean and autocorrelation. The frequency-domain
method used to describe a random signal includes power density spectrum which describes how the average
power is distributed across frequencies. Communication and control system engineers deal with noise which
is random. When a signal containing noise is applied to a communication system, it is necessary to know the
effect of noise on the performance of the system. Therefore, it is necessary to study the output of a system
for a random input. In this chapter, we will concentrate on this aspect. First a brief introduction to system
analysis is introduced in the following section.
for all values of T then the system is time-invariant. On the other hand, if the output
y(t, T) π y(t – T) (8.6)
then the system is time-variant.
To test the time-invariance property of a given discrete-time system, first apply an arbitrary input sequence x(n) and find y(n). Then delay the input sequence by k samples, find the corresponding output sequence, and denote it as
y(n, k) = T[x(n − k)]   (8.7)
Next, delay the output sequence by k samples and denote it as y(n − k).
If y(n, k) = y(n − k)   (8.8)
for all possible values of k, the system is time-invariant; otherwise the system is time-variant.
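The test in Eqs. (8.7)–(8.8) is easy to carry out numerically. The following Python sketch applies it to an assumed example system y(n) = n x(n); the function names and the choice of test signal are illustrative assumptions, not part of the text's development.

import numpy as np

def system(x):
    # assumed example system: y(n) = n x(n) (expected to be time-variant)
    n = np.arange(len(x))
    return n * x

def is_time_invariant(T, x, k):
    xk = np.roll(x, k); xk[:k] = 0      # x(n - k), zero-padded delay
    y_nk = T(xk)                        # y(n, k) = T[x(n - k)], Eq. (8.7)
    y = T(x)
    yk = np.roll(y, k); yk[:k] = 0      # delayed output y(n - k)
    return np.allclose(y_nk, yk)        # Eq. (8.8)

x = np.random.default_rng(0).standard_normal(64)
print(is_time_invariant(system, x, 5))  # False: the system is time-variant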
If the response of the system due to impulse d(t) is h(t) then the response of the system due to delayed
impulse is
h(t, T) = T[d(t – T)] (8.11b)
Substituting Eq. (8.11b) in Eq. (8.11a), we get
y(t) = ∫_{−∞}^{∞} x(τ)h(t, τ)dτ   (8.11c)
For a time-invariant system, the output due to delayed input by T seconds is equal to delayed output by T
seconds.
This is called convolution integral, or simply convolution. The convolution of two signals x(t) and h(t)
can be represented as
y(t) = x(t) * h(t) (8.12b)
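As a numerical illustration of Eq. (8.12b), the convolution integral can be approximated by a Riemann sum on a time grid. The signals below are assumed examples chosen because the convolution has a simple closed form.

import numpy as np

dt = 1e-3
t = np.arange(0, 10, dt)
x = np.exp(-t)                          # x(t) = e^{-t} u(t)
h = np.exp(-2 * t)                      # h(t) = e^{-2t} u(t)
y = np.convolve(x, h)[:len(t)] * dt     # Riemann-sum approximation of Eq. (8.12b)
y_exact = np.exp(-t) - np.exp(-2 * t)   # closed-form convolution
print(np.max(np.abs(y - y_exact)))      # O(dt) discretization error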
Properties of Convolution
Consider a system with impulse response h(t) and input x(t). Then the output of the system is
y(t) = x(t) * h(t) = ∫_{−∞}^{∞} x(τ)h(t − τ)dτ = ∫_{−∞}^{∞} h(τ)x(t − τ)dτ
Causality
In the above equation, the output y(t) depends on past, present and future values of the input: the terms with τ ≥ 0 involve present and past input values, while the terms with τ < 0 involve future input values. For a causal system, the output depends only on present and past values of the input, which implies that
h(t) = 0 for t < 0   (8.15)
An LTI continuous system is causal if and only if its impulse response is zero for negative values of t.
Stability
A system is said to be BIBO stable if it produces a bounded output for a bounded input.
Let us consider an input signal x(t) that has a bounded magnitude M. That is,
|x(t)| £ M < •
Then from the convolution integral, the magnitude of the output can be expressed as
|y(t)| = |∫_{−∞}^{∞} h(τ)x(t − τ)dτ| ≤ ∫_{−∞}^{∞} |h(τ)||x(t − τ)|dτ ≤ M∫_{−∞}^{∞} |h(τ)|dτ
Thus the output is bounded provided
∫_{−∞}^{∞} |h(τ)|dτ < ∞   (8.16)
Therefore, the system is stable if the impulse response is absolutely integrable.
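The condition in Eq. (8.16) can be probed numerically by integrating |h(t)| over a widening window; a value that keeps growing with the window signals instability. The impulse responses below are the examples used in the solved problems that follow, and the finite limits are a stand-in for the infinite integral.

import numpy as np
from scipy.integrate import quad

h_stable = lambda t: np.exp(-abs(t))          # e^{-|t|}: integral -> 2
h_unstable = lambda t: t if t > 0 else 0.0    # t u(t): integral diverges
for name, h in [("e^{-|t|}", h_stable), ("t u(t)", h_unstable)]:
    for T in (50, 500):
        val, _ = quad(lambda t: abs(h(t)), -T, T, limit=400)
        print(name, T, round(val, 2))   # stable case saturates near 2;
                                        # unstable case grows with the window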
REVIEW QUESTIONS
1. Define a system.
2. State superposition principle.
3. Explain in detail about the classification of systems.
4. Derive an expression for output of a system for any arbitrary input.
5. What is convolution integral?
6. State properties of convolution.
7. What is the condition on impulse response for (a) causality, and (b) stability?
Solved Problems
8.1 Determine which of the following systems are linear, time-invariant, stable and realizable.
(a) y(t) = |x(t)| (b) y(t) = x(t/2) (c) y(t) = e^{x(t)} (d) y(t) = tx(t)
Solution
Linearity Check
(a) Given: y(t) = |x(t)|
y1(t) = |x1(t)| and y2(t) = |x2(t)|
The output due to weighted sum of inputs is
y3(t) = |a1x1(t) + b1x2(t)|
The weighted sum of outputs is
a1y1(t) + b1y2(t) = a1|x1(t)| + b1|x2(t)|
y₃(t) ≠ a₁y₁(t) + b₁y₂(t)
Hence, the system is nonlinear.
Time Invariant Check
Given: y(t) = |x(t)|
y₃(t) = e^{a₁x₁(t) + b₁x₂(t)}
a₁y₁(t) + b₁y₂(t) = a₁e^{x₁(t)} + b₁e^{x₂(t)}
y₃(t) ≠ a₁y₁(t) + b₁y₂(t). Hence, the system is nonlinear.
Time Invariant Check
Given: y(t) = e^{x(t)}
The output due to the delayed input is y(t, T) = e^{x(t−T)}
The delayed output is y(t − T) = e^{x(t−T)}
y(t, T) = y(t − T). Therefore, the system is time-invariant.
Stability Check
Let the input x(t) be bounded, satisfying |x(t)| ≤ M_x < ∞. Then the magnitude of the output |y(t)| = |e^{x(t)}| ≤ e^{M_x} < ∞. Therefore, the system is stable.
Realizability Check
The output depends only on the present value of the input. Therefore, the system is realizable.
(d) Given: y(t) = tx(t)
y₁(t) = tx₁(t) and y₂(t) = tx₂(t)
The output due to the weighted sum of inputs is
y₃(t) = a₁tx₁(t) + b₁tx₂(t)
The weighted sum of outputs is
a₁y₁(t) + b₁y₂(t) = a₁tx₁(t) + b₁tx₂(t)
y₃(t) = a₁y₁(t) + b₁y₂(t). Hence, the system is linear.
Time Invariant Check
Given: y(t) = tx(t)
The output due to the delayed input is y(t, T) = tx(t − T)
The delayed output is y(t − T) = (t − T)x(t − T)
y(t, T) ≠ y(t − T). Therefore, the system is time-variant.
Stability Check
Let the input x(t) be bounded, satisfying |x(t)| ≤ M_x < ∞. Then |y(t)| = |tx(t)| grows without bound as t → ∞. Therefore, the system is unstable.
Realizability Check
The output depends only on the present value of the input. Therefore, the system is realizable.
8.2 Find whether the following systems with the given impulse response are stable/realizable.
(a) h(t) = e–|t| (b) h(t) = tu(t) (c) h(t) = et cos (w0t)u(t).
Solution
Given:
(a) h(t) = e^{−|t|}
For the system to be stable, the impulse response must be absolutely integrable. That is,
∫_{−∞}^{∞} |h(t)|dt < ∞
∫_{−∞}^{∞} |h(t)|dt = ∫_{−∞}^{∞} e^{−|t|}dt = ∫_{−∞}^{0} e^{t}dt + ∫_{0}^{∞} e^{−t}dt = 1 + 1 = 2 < ∞
Therefore, the system is stable. However, h(t) = e^{−|t|} ≠ 0 for t < 0, so the system is not realizable.
(b) h(t) = tu(t): ∫_{0}^{∞} t dt = ∞, so the system is unstable. The impulse response is equal to zero for t < 0, so the system is realizable.
(c) h(t) = e^{t}cos(ω₀t)u(t): |h(t)| grows without bound, so it is not absolutely integrable and the system is unstable; since h(t) = 0 for t < 0, it is realizable.
8.3 Determine which of the following impulse responses do not correspond to a system that is stable or
realizable or both and state why?
(a) h(t) = u(t + 3)
(b) h(t) = u(t)e^{−t²}
(c) h(t) = e^{t}sin(ω₀t), ω₀ a real constant
(d) h(t) = e^{−3t}sin(ω₀t)u(t), ω₀ a real constant
Solution A system is stable if its impulse response is absolutely integrable. That is,
∫_{−∞}^{∞} |h(t)|dt < ∞
Similarly, a system is physically realizable if h(t) = 0 for t < 0. Using these two conditions, we solve the given problem.
(a) h(t) = u(t + 3): ∫_{−∞}^{∞} |h(t)|dt = ∫_{−3}^{∞} dt = ∞, so the system is unstable; also h(t) ≠ 0 for −3 ≤ t < 0, so it is not realizable.
(b) ∫_{−∞}^{∞} |h(t)|dt = ∫_{0}^{∞} e^{−t²}dt = (1/2)∫_{−∞}^{∞} e^{−t²}dt
The integrand is proportional to a Gaussian density with m = 0 and σ = 1/√2: since (1/√(2πσ²))∫_{−∞}^{∞} e^{−t²}dt = 1 when 2σ² = 1, we get ∫_{−∞}^{∞} e^{−t²}dt = √π. Hence
∫_{−∞}^{∞} |h(t)|dt = √π/2 < ∞
so the system is stable, and since h(t) = 0 for t < 0, it is realizable.
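The Gaussian-integral step in part (b) is easy to confirm numerically; the following short check uses scipy's adaptive quadrature and is purely illustrative.

import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda t: np.exp(-t**2), 0, np.inf)
print(val, np.sqrt(np.pi) / 2)   # both ~0.886227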
(c) h(t) = e^{t}sin(ω₀t) is not absolutely integrable:
∫_{−∞}^{∞} e^{t}sin(ω₀t)dt = [e^{t}(sin(ω₀t) − ω₀cos(ω₀t))/(1 + ω₀²)]_{−∞}^{∞} = ∞
Hence, the system is unstable; it is also not realizable, since h(t) ≠ 0 for t < 0.
(d) h(t) = e^{−3t}sin(ω₀t)u(t) is absolutely integrable and zero for t < 0, so the system is both stable and realizable.
Practice Problems
8.1 Test whether the following systems are linear/nonlinear, time-invariant/time-variant, causal/non-causal, stable/unstable:
(a) y(t) = ∫_{−∞}^{t} x(τ)dτ (b) y(t) = log[x(t)] (c) y(t) = x²(t)
(Ans: (a) Linear, time-invariant, causal, unstable (b) Nonlinear, time-invariant, causal, stable (c) Nonlinear, time-invariant, causal, stable)
8.2 Test whether the following systems are linear/nonlinear, time-invariant/time-variant, causal/non-causal, stable/unstable:
(a) y(n) = (1/2)[x(n + 1) + x(n − 1)] (b) y(n) = x(n²)
(Ans: (a) Linear, time-invariant, non-causal, stable (b) Linear, time-variant, non-causal, stable)
FREQUENCY-DOMAIN
CHARACTERIZATION OF A SYSTEM 8.5
So far we studied the time-domain representation of a system. In this section, we will study the frequency
domain representation of a system using Fourier transform.
The Fourier transform of a signal x(t) is given by
X(ω) = ∫_{−∞}^{∞} x(t)e^{−jωt}dt for all ω   (8.17)
and the inverse Fourier transform of X(ω) is given by
x(t) = (1/2π)∫_{−∞}^{∞} X(ω)e^{jωt}dω for all t   (8.18)
The Fourier transform of a signal x(t) exists if x(t) is absolutely integrable over (−∞, ∞). That is,
∫_{−∞}^{∞} |x(t)|dt < ∞   (8.19)
Consider an LTI continuous-time system with impulse response h(t). If the input to the system is x(t) then
output y(t) of the system can be obtained using convolution integral. That is,
y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ)dτ = ∫_{−∞}^{∞} h(τ)x(t − τ)dτ
If x(t) is a complex exponential signal x(t) = e^{jωt}, then the output is
y(t) = ∫_{−∞}^{∞} h(τ)e^{jω(t−τ)}dτ = e^{jωt}∫_{−∞}^{∞} h(τ)e^{−jωτ}dτ = H(ω)e^{jωt}
where
H(ω) = ∫_{−∞}^{∞} h(τ)e^{−jωτ}dτ   (8.21)
For an arbitrary input, taking the Fourier transform of the convolution gives
Y(ω) = ∫_{−∞}^{∞} x(τ)[H(ω)]e^{−jωτ}dτ = H(ω)∫_{−∞}^{∞} x(τ)e^{−jωτ}dτ = H(ω)X(ω)
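Equation (8.21) can be evaluated numerically for a given impulse response. The sketch below uses the assumed example h(t) = e^{−2t}u(t), whose exact transfer function 1/(2 + jω) also appears in Solved Problem 8.7.

import numpy as np

dt = 1e-4
t = np.arange(0, 20, dt)
h = np.exp(-2 * t)                                   # assumed h(t) = e^{-2t} u(t)
for w in (0.0, 1.0, 5.0):
    H_num = np.sum(h * np.exp(-1j * w * t)) * dt     # Riemann sum for Eq. (8.21)
    H_exact = 1 / (2 + 1j * w)
    print(w, abs(H_num - H_exact))                   # small discretization error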
Solved Problem
8.4 Find the transfer function of the RC network shown in Fig. 8.3.
Fig. 8.3
Solution If x(t) = e^{jωt}, then the output y(t) = H(ω)x(t) = H(ω)e^{jωt}   (8.26)
Differentiating the output,
dy(t)/dt = jωH(ω)e^{jωt} = jωH(ω)x(t)   (8.27)
Substituting Eq. (8.26) in Eq. (8.25), we get
i(t) = jωC H(ω)x(t)
Now substituting the values of x(t), y(t) and i(t) in Eq. (8.24),
x(t) = jωRC H(ω)x(t) + H(ω)x(t)   (8.28)
⇒ H(ω) = 1/(1 + jωRC)   (8.29)
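As a quick numerical check of Eq. (8.29), the magnitude of H(ω) should equal 1/√2 at ω = 1/RC (the 3 dB frequency). The component values below are arbitrary assumptions.

import numpy as np

R, C = 1e3, 1e-6                    # assumed values: 1 kOhm, 1 uF
w = np.array([0.0, 1 / (R * C), 10 / (R * C)])
H = 1 / (1 + 1j * w * R * C)        # Eq. (8.29)
print(np.abs(H))                    # [1.0, ~0.7071, ~0.0995]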
Practice Problem
8.3 Find the transfer function of the RLC circuit shown in Fig. 8.4.
(Ans: H(ω) = 1/(1 − ω²LC + jωRC))
Fig. 8.4
If X(t) is a WSS random process then the output of the system can be expressed as Y(t) = X(t) * h(t)
By definition,
Y(t) = ∫_{−∞}^{∞} X(τ)h(t − τ)dτ = ∫_{−∞}^{∞} h(τ)X(t − τ)dτ   (8.31)
Taking the expectation,
E[Y(t)] = ∫_{−∞}^{∞} h(τ)E[X(t − τ)]dτ = ∫_{−∞}^{∞} h(τ)m_X(t − τ)dτ
where m_X(t) is the mean function of the process X(t). Since X(t) is stationary in the wide sense,
m_X(t − τ) = m_X = constant
Hence, the mean function m_Y(t) of the process Y(t) is
m_Y(t) = E[Y(t)] = m_X∫_{−∞}^{∞} h(τ)dτ   (8.32)
We have
H(ω) = ∫_{−∞}^{∞} h(τ)e^{−jωτ}dτ
When ω = 0,
H(0) = ∫_{−∞}^{∞} h(τ)dτ
so that Eq. (8.32) can be written as m_Y = m_X H(0).
The autocorrelation function of the output is
R_YY(t, t + τ) = E[Y(t)Y(t + τ)]
= E[∫_{−∞}^{∞} h(τ₁)X(t − τ₁)dτ₁ ∫_{−∞}^{∞} h(τ₂)X(t + τ − τ₂)dτ₂]
= E[∫_{−∞}^{∞}∫_{−∞}^{∞} X(t − τ₁)X(t + τ − τ₂)h(τ₁)h(τ₂)dτ₁dτ₂]
= ∫_{−∞}^{∞}∫_{−∞}^{∞} E[X(t − τ₁)X(t + τ − τ₂)]h(τ₁)h(τ₂)dτ₁dτ₂
= ∫_{−∞}^{∞}∫_{−∞}^{∞} R_XX(τ + τ₁ − τ₂)h(τ₁)h(τ₂)dτ₁dτ₂   (8.34)
Since RXX(t + t1 – t2) is a function of time difference, RYY (t, t + t) will also be a function of time
difference.
So we can say that the output process Y(t) is also a WSS random process.
Note: If the input to an LTI system is a WSS random process, the output obtained is also WSS.
The Mean Square Value
The mean square value of the output process is
E[Y²(t)] = E[∫_{−∞}^{∞}∫_{−∞}^{∞} X(t − τ₁)X(t − τ₂)h(τ₁)h(τ₂)dτ₁dτ₂]
= ∫_{−∞}^{∞}∫_{−∞}^{∞} E[X(t − τ₁)X(t − τ₂)]h(τ₁)h(τ₂)dτ₁dτ₂
= ∫_{−∞}^{∞}∫_{−∞}^{∞} R_XX(τ₁ − τ₂)h(τ₁)h(τ₂)dτ₁dτ₂
For a general (not necessarily stationary) input this can be written as
E[Y²(t)] = ∫_{−∞}^{∞}∫_{−∞}^{∞} R_XX(τ₁, τ₂)h(t − τ₁)h(t − τ₂)dτ₁dτ₂
The cross-correlation between the output and the input is
R_YX(τ) = E[Y(t)X(t + τ)] = E[X(t + τ)∫_{−∞}^{∞} h(τ₁)X(t − τ₁)dτ₁]
= ∫_{−∞}^{∞} h(τ₁)E[X(t + τ)X(t − τ₁)]dτ₁ = ∫_{−∞}^{∞} h(τ₁)R_XX(τ + τ₁)dτ₁
Let τ₁ = −τ′. Then
R_YX(τ) = ∫_{−∞}^{∞} h(−τ′)R_XX(τ − τ′)dτ′ = R_XX(τ) * h(−τ)   (8.37)
Similarly,
R_YY(t, t + τ) = E[Y(t + τ)Y(t)] = E[(∫_{−∞}^{∞} h(τ₁)X(t + τ − τ₁)dτ₁)Y(t)]
= ∫_{−∞}^{∞} h(τ₁)E[X(t + τ − τ₁)Y(t)]dτ₁ = ∫_{−∞}^{∞} h(τ₁)R_YX(τ − τ₁)dτ₁ = R_YX(τ) * h(τ)
REVIEW QUESTIONS
8. Prove that if the input to a time invariant stable linear system is a WSS process, then the output also
is a WSS process.
9. If X(t) is a WSS process and Y(t) = ∫_{−∞}^{∞} X(τ)h(t − τ)dτ, then prove that
S_YY(ω) = F[R_YY(τ)]
= ∫_{−∞}^{∞}∫_{−∞}^{∞}∫_{−∞}^{∞} R_XX(τ + τ₁ − τ₂)h(τ₁)h(τ₂)e^{−jωτ}dτ dτ₁dτ₂
= ∫_{−∞}^{∞}∫_{−∞}^{∞} h(τ₁)h(τ₂)[∫_{−∞}^{∞} R_XX(τ + τ₁ − τ₂)e^{−jωτ}dτ]dτ₁dτ₂
= ∫_{−∞}^{∞}∫_{−∞}^{∞} h(τ₁)h(τ₂)S_XX(ω)e^{−jω(τ₂−τ₁)}dτ₁dτ₂
= S_XX(ω)∫_{−∞}^{∞} h(τ₁)e^{jωτ₁}dτ₁ ∫_{−∞}^{∞} h(τ₂)e^{−jωτ₂}dτ₂
= S_XX(ω)H(−ω)H(ω)
= |H(ω)|²S_XX(ω)
S_YY(ω) = |H(ω)|²S_XX(ω)   (8.46)
In terms of f
SYY(f) = |H(f)|2 SXX(f ) (8.47)
Equation (8.47) implies that a system may be viewed as a filter that selectively allows certain frequency components of the input process to pass. Since the autocorrelation and the power spectrum are a Fourier transform pair,
RYY(t) = F–1[|H(w)|2 SXX(w)] (8.48)
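Equation (8.46) can also be verified by simulation: pass white noise through a known filter, estimate the output PSD, and compare with |H(f)|²S_XX(f). The first-order Butterworth low-pass filter below is an assumed example, and the agreement is limited by the statistical variability of the estimate.

import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs = 1000.0
x = rng.standard_normal(2**17)                 # white noise: one-sided S_XX = 2/fs
b, a = signal.butter(1, 100, fs=fs)            # assumed example filter
y = signal.lfilter(b, a, x)

f, Syy = signal.welch(y, fs=fs, nperseg=4096)  # estimated one-sided output PSD
_, H = signal.freqz(b, a, worN=f, fs=fs)
Syy_theory = np.abs(H)**2 * (2.0 / fs)         # |H(f)|^2 S_XX(f), Eq. (8.47)
err = np.abs(Syy - Syy_theory) / Syy_theory
print(np.median(err))                          # a few percent (statistical)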
REVIEW QUESTION
10. Show that S_YY(ω) = |H(ω)|²S_XX(ω), where S_XX(ω) and S_YY(ω) are the power spectral density functions of the input X(t) and the output Y(t), and H(ω) is the system transfer function.
Solved Problems
dX (t )
8.5 If X(t) is a differentiable WSS random process and Y (t ) = , find an expression for SYY(w) and
RYY(t). dt
Solution We know,
SYY(w) = |H(w)|2 SXX(w)
Given:
dX (t )
Y(t) =
dt
Taking Fourier transform on both sides, we get
Y(w) = jwX(w)
H(ω) = Y(ω)/X(ω) = jω
S_YY(ω) = |H(ω)|²S_XX(ω) = |jω|²S_XX(ω) = ω²S_XX(ω)
R_YY(τ) = F⁻¹[S_YY(ω)] = F⁻¹[ω²S_XX(ω)] = −d²R_XX(τ)/dτ²
8.6 A zero-mean white noise with noise power density N₀/2 is applied to a filter with transfer function
H(ω) = 1/(1 + jω)
(a) Find S_YX(ω) and R_YX(τ).
(b) Find S_YY(ω) and R_YY(τ).
(c) What is the average power of the output?
Solution We know
S_YX(ω) = H(ω)S_XX(ω) = (N₀/2)/(1 + jω)
R_YX(τ) = F⁻¹[(N₀/2)/(1 + jω)] = (N₀/2)e^{−τ}, τ > 0
S_YY(ω) = |H(ω)|²S_XX(ω) = (N₀/2)/(1 + ω²)
R_YY(τ) = F⁻¹[S_YY(ω)] = (N₀/4)e^{−|τ|}
(c) Average power = R_YY(0) = N₀/4
8.7 A random process X(t) is applied to a linear system whose impulse response is h(t) = e^{−2t}, t ≥ 0. If the ACF of the process is R_XX(τ) = e^{−|τ|}, find the PSD of the output process Y(t).
Solution The PSD of the input process is
S_XX(ω) = F[R_XX(τ)] = ∫_{−∞}^{∞} R_XX(τ)e^{−jωτ}dτ = ∫_{−∞}^{∞} e^{−|τ|}e^{−jωτ}dτ
= ∫_{−∞}^{0} e^{(1−jω)τ}dτ + ∫_{0}^{∞} e^{−(1+jω)τ}dτ = 1/(1 − jω) + 1/(1 + jω) = 2/(1 + ω²)
The transfer function of the linear system is
H(ω) = F[h(t)] = ∫_{−∞}^{∞} h(t)e^{−jωt}dt = ∫_{0}^{∞} e^{−2t}e^{−jωt}dt = 1/(2 + jω)
The PSD of the output process is
S_YY(ω) = |H(ω)|²S_XX(ω) = |1/(2 + jω)|²·2/(1 + ω²) = 2/[(4 + ω²)(1 + ω²)]
8.8 White noise n(t) with G(f) = η/2 is passed through a low-pass RC network with a 3 dB frequency f_c.
(a) Find the autocorrelation R(τ) of the output noise of the network.
(b) Find ρ(τ) = R(τ)/R(0).
(c) Find τ such that ρ(τ) ≤ 0.1.
Solution Given: S_NN(ω) = η/2
S_YY(ω) = |H(ω)|²S_NN(ω) = [1/(1 + ω²R²C²)](η/2)
(a) R(τ) = F⁻¹[(η/2)/(1 + ω²R²C²)] = (η/4RC)F⁻¹[2(1/RC)/((1/RC)² + ω²)] = (η/4RC)e^{−|τ|/RC}
(b) ρ(τ) = R(τ)/R(0) = e^{−|τ|/RC}
(c) ρ(τ) ≤ 0.1 requires e^{−τ/RC} ≤ 0.1, that is, τ ≥ RC ln 10 ≈ 2.3RC.
8.9 The input voltage to an RL low-pass filter circuit is a stationary random process X(t) with E[X(t)] = 2 and R_XX(τ) = 4 + e^{−2|τ|}. Let Y(t) be the voltage across the resistor. Find E[Y(t)] and S_YY(ω).
Solution The power transfer function of the network is
|H(ω)|² = 1/(1 + ω²L²/R²)
E[Y(t)] = H(0)E[X(t)] = E[X(t)] = 2 (since H(0) = 1)
S_XX(ω) = F[R_XX(τ)] = F[4 + e^{−2|τ|}] = 4F[1] + F[e^{−2|τ|}] = 4[2πδ(ω)] + 4/(4 + ω²) = 8πδ(ω) + 4/(4 + ω²)
S_YY(ω) = |H(ω)|²S_XX(ω) = [1/(1 + (ωL/R)²)]{8πδ(ω) + 4/(4 + ω²)}
8.10 A WSS random process X(t) with PSD SXX(w) is applied as input to the following system. Find
SYY(w).
Fig. 8.7
From Fig. 8.7 the power transfer function is
|H(ω)|² = 4cos²(ωT/2)
The PSD
S_YY(ω) = |H(ω)|²S_XX(ω) = 4cos²(ωT/2)S_XX(ω)
8.11 A signal x(t) = u(t) e–at is applied to a network having impulse response h(t) = Wu(t) e–Wt, Here, a
and W are real positive constants. Find the network response.
Solution
y(t) = ∫_{0}^{t} e^{−aτ}We^{−W(t−τ)}dτ = We^{−Wt}∫_{0}^{t} e^{−(a−W)τ}dτ
= We^{−Wt}[e^{−(a−W)τ}/(−(a − W))]₀^{t} = [We^{−Wt}/(W − a)][e^{−(a−W)t} − 1]
y(t) = [W/(W − a)][e^{−at} − e^{−Wt}], t ≥ 0
8.12 The input to an RLC series circuit is a stationary random process X(t) with E[X(t)] = 2 and RXX(t)
= 4 + e–2|t|. Let Y(t) be the voltage across the capacitor. Find E[Y(t)] and PSD of Y(t).
Solution Let i(t) be the current flowing through the circuit show in Fig. 8.8.
Fig. 8.8
The transfer function of the circuit is
H(ω) = 1/(1 − ω²LC + jωCR)
|H(ω)|² = 1/[(1 − ω²LC)² + ω²C²R²]
The PSD of Y(t) is
S_YY(ω) = |H(ω)|²S_XX(ω)
S_XX(ω) = F[R_XX(τ)] = F[4 + e^{−2|τ|}] = 4[2πδ(ω)] + 2(2)/(2² + ω²) = 8πδ(ω) + 4/(4 + ω²)
E[Y(t)] = m_XH(0); H(0) = 1 and m_X = E[X(t)] = 2
⇒ E[Y(t)] = 2
8.13 A WSS random process X(t) is applied to the input of an LTI system whose impulse response is
5te–2t. The mean of X(t) is 3. Find the mean output of the system.
Solution
H(0) = ∫_{0}^{∞} 5te^{−2t}dt = 5/4
The mean value of Y(t) = E[Y(t)] = m_XH(0) = 3(5/4) = 15/4
8.14 A white Gaussian noise process of zero mean and PSD η/2 is applied to a high-pass filter as shown in Fig. 8.9. Find the mean and variance of the output process Y(t).
Solution
S_YY(ω) = |H(ω)|²S_NN(ω) = [ω²L²/(R² + ω²L²)](η/2)
R_YY(τ) = (η/2)F⁻¹[ω²L²/(R² + ω²L²)] = (η/2)F⁻¹[1 − R²/(R² + ω²L²)]
= (η/2)δ(τ) − (ηR/4L)e^{−R|τ|/L}
m_Y(t) = H(0)m_X(t); since m_X(t) = 0, m_Y(t) = 0
Var[Y(t)] = E[Y²(t)] − m²_Y(t) = E[Y²(t)] = R_YY(τ)|_{τ=0}
Because of the δ(τ) term, R_YY(0) is unbounded: an ideal high-pass network passes the white-noise input at high frequencies without attenuation, so the output power in this idealized model is infinite. With band-limited noise, the δ(τ) term becomes a finite-height pulse and the variance is finite.
8.15 A stationary random process X(t) having an autocorrelation
function RXX(t) = e–2|t| is applied to the network as shown in Fig. 8.10.
Fig. 8.10
Solution
(a) Given: RXX(t) = e–2|t|
Solution
(a) Given: R_XX(τ) = e^{−2|τ|}
S_XX(ω) = F[R_XX(τ)] = F[e^{−2|τ|}] = 2(2)/(ω² + 2²) = 4/(ω² + 4)
(b) The phasor form of the network is shown in Fig. 8.11. From Fig. 8.12 we find
Y(ω) = X(ω)Z_C/(Z_p + Z_C)
Z_p = [2(1/2jω)]/[2 + 1/(2jω)] = 2/(1 + j4ω)
Z_p + Z_C = 2/(1 + j4ω) + 1/(jω) = (1 + j6ω)/[jω(1 + j4ω)]
H(ω) = Y(ω)/X(ω) = Z_C/(Z_p + Z_C) = (1/jω)·jω(1 + j4ω)/(1 + j6ω) = (1 + j4ω)/(1 + j6ω)
S_YY(ω) = |H(ω)|²S_XX(ω) = [(1 + 16ω²)/(1 + 36ω²)]S_XX(ω) = [(1 + 16ω²)/(1 + 36ω²)]·4/(ω² + 4)
8.16 A random process X(t) has an autocorrelation function R_XX(τ) = A² + Be^{−|τ|}, where A and B are positive constants. Find the mean value of the response of a system having impulse response
h(t) = e^{−kt} for t > 0 and 0 for t < 0
where k is a real positive constant, when X(t) is its input.
Solution Given:
RXX(t) = A2 + Be–|t|
The mean value of the response is
mY(t) = mX(t) H(0)
We know that
{E[X(t)]}² = lim_{τ→∞} R_XX(τ) = lim_{τ→∞}[A² + Be^{−|τ|}] = A²
⇒ E[X(t)] = A
Hence m_Y(t) = m_XH(0) = A∫_{0}^{∞} e^{−kt}dt = A/k
8.17 A WSS process X(t), with mean value 10 and power spectrum
S_XX(ω) = 25πδ(ω) + 4/[1 + (ω/4)²]
is applied to a network with impulse response h(t) = 2e^{−2|t|}.
(a) Find H(ω) of the network. (b) Find the mean and power spectrum of the response Y(t).
Solution
(a) H(ω) = F[2e^{−2|t|}] = 2{2(2)/(ω² + 2²)} = 8/(ω² + 4)
(b) E[Y] = m_XH(0) = 10(8/4) = 20
S_YY(ω) = |H(ω)|²S_XX(ω) = [8/(ω² + 4)]²{25πδ(ω) + 4/[1 + (ω/4)²]} = 100πδ(ω) + [64/(ω² + 4)²]·4/[1 + (ω/4)²]
Fig. 8.13
Y(t) = ∫_{−∞}^{∞} A sin(ω₀τ + θ)We^{−W(t−τ)}u(t − τ)dτ
Since u(t − τ) = 1 for τ ≤ t and 0 for τ > t,
Y(t) = AW∫_{−∞}^{t} sin(ω₀τ + θ)e^{−W(t−τ)}dτ = AWe^{−Wt}∫_{−∞}^{t} sin(ω₀τ + θ)e^{Wτ}dτ
Using ∫e^{ax}sin(bx)dx = e^{ax}(a sin bx − b cos bx)/(a² + b²),
Y(t) = AWe^{−Wt}·e^{Wt}[W sin(ω₀t + θ) − ω₀cos(ω₀t + θ)]/(W² + ω₀²)
= [AW/(W² + ω₀²)][W sin(ω₀t + θ) − ω₀cos(ω₀t + θ)]
= [AW/√(W² + ω₀²)][(W/√(W² + ω₀²))sin(ω₀t + θ) − (ω₀/√(W² + ω₀²))cos(ω₀t + θ)]
Let cos φ = W/√(W² + ω₀²) and sin φ = ω₀/√(W² + ω₀²), so that φ = tan⁻¹(ω₀/W). Then
Y(t) = [AW/√(W² + ω₀²)]sin(ω₀t + θ − φ)
8.19 A random noise process X(t) having power spectrum S_XX(ω) = 3/(49 + ω²) is applied to a network for which h(t) = t²e^{−7t}u(t). The network response is denoted by Y(t).
(a) What is the average power of X(t)?
(b) Find the power spectrum of Y(t).
(c) Find the average power of Y(t).
Solution Given: S_XX(ω) = 3/(49 + ω²) and h(t) = t²e^{−7t}u(t)
The ACF is given by
R_XX(τ) = F⁻¹[S_XX(ω)] = F⁻¹[3/(7² + ω²)]
Using F⁻¹[2a/(a² + ω²)] = e^{−a|τ|} with a = 7, R_XX(τ) = (3/14)e^{−7|τ|}
(a) The average power of X(t) is P_X = R_XX(0) = 3/14
(b) H(ω) = F[t²e^{−7t}u(t)] = j²(d²/dω²)[1/(7 + jω)] = 2/(7 + jω)³, so |H(ω)|² = 4/(49 + ω²)³ and
S_YY(ω) = |H(ω)|²S_XX(ω) = [4/(49 + ω²)³][3/(49 + ω²)] = 12/(49 + ω²)⁴
(c) P_Y = (1/2π)∫_{−∞}^{∞} 12/(49 + ω²)⁴dω. Substituting ω = 7tanθ, dω = 7sec²θdθ:
P_Y = [12·7/(2π·7⁸)]∫_{−π/2}^{π/2} cos⁶θdθ = [12/(2π·7⁷)]∫_{−π/2}^{π/2} [(1 + cos 2θ)/2]³dθ
= [12/(16π·7⁷)]∫_{−π/2}^{π/2} (1 + cos 2θ)³dθ = [3/(4π·7⁷)]{2.5π} = 15/(8·7⁷)
P_Y = 15/(8·7⁷)
8.20 X(t) is a stationary random process with zero mean and autocorrelation RXX(t) = e–2|t| is applied to
1
a system with transfer function H (w ) = . Find the mean and PSD of its output.
jw + 2
Solution Given:
R_XX(τ) = e^{−2|τ|} and H(ω) = 1/(jω + 2)
S_XX(ω) = F[e^{−2|τ|}] = 2(2)/(4 + ω²) = 4/(4 + ω²)
The PSD of the output is
S_YY(ω) = |H(ω)|²S_XX(ω) = [1/(4 + ω²)][4/(4 + ω²)] = 4/(4 + ω²)²
Since the mean of the input X(t) is zero, the mean of the output is also zero:
m_Y = H(0)m_X = 0
8.21 A random process X(t) is applied to a network with impulse response h(t) = u(t) te–bt where b > 0 is
a constant. The cross-correlation of X(t) with the output Y(t) is known to have the same form
RXY(t) = t e–bt u(t)
(a) Find the autocorrelation of Y(t). (b) What is the average power in Y(t)?
Solution Given:
h(t) = te^{−bt}u(t) and R_XY(τ) = τe^{−bτ}u(τ)
S_XY(ω) = H(ω)S_XX(ω)
H(ω) = F[te^{−bt}u(t)] = 1/(b + jω)²
S_XY(ω) = F[τe^{−bτ}u(τ)] = 1/(b + jω)²
⇒ S_XX(ω) = S_XY(ω)/H(ω) = 1
The PSD of the output is S_YY(ω) = |H(ω)|²S_XX(ω) = 1/(b² + ω²)²
8.22 A random process n(t) has a power spectral density G(f) = η/2 for −∞ < f < ∞. The random process is passed through a low-pass filter which has a transfer function H(f) = 2 for −f_m ≤ f ≤ f_m and H(f) = 0 otherwise. Find the PSD of the waveform at the output of the filter.
Solution Given: G(f) = η/2 for −∞ < f < ∞, and H(f) = 2 for −f_m ≤ f ≤ f_m, 0 otherwise.
S_YY(f) = |H(f)|²G(f) = 2η for |f| ≤ f_m, and 0 elsewhere.
8.23 Find the input correlation function, output correlation and output spectral density of an RC low-
pass filter, where the filter is subjected to a white noise of spectral density N0/2.
Solution
Input correlation = F⁻¹[N₀/2] = (N₀/2)δ(τ)
Output spectral density S_YY(ω) = |H(ω)|²S_XX(ω) = (N₀/2)/(1 + ω²R²C²)
Output correlation = F⁻¹[S_YY(ω)] = (N₀/2)F⁻¹[(1/R²C²)/(ω² + (1/RC)²)] = (N₀/4RC)e^{−|τ|/RC}
(using F⁻¹[2a/(a² + ω²)] = e^{−a|τ|} with a = 1/RC)
8.24 A wide-sense stationary random process X(t) with autocorrelation function R_XX(τ) = Ae^{−a|τ|}, where A and a are real positive constants, is applied to the input of a linear time-invariant system with impulse response h(t) = e^{−bt}u(t), where b is a real positive constant. Find the autocorrelation of the output Y(t) of the system.
Practice Problems
8.4 If X(t) is a band-limited process such that S_XX(ω) = 0 for |ω| > σ, prove that 2[R_XX(0) − R_XX(τ)] ≤ σ²τ²R_XX(0).
Solved Problem
8.25 Assume a random process X(t) is given as input to a system with transfer function H(ω) = 1 for −ω₀ ≤ ω ≤ ω₀. If the autocorrelation function of the input process is (N₀/2)δ(τ), find the autocorrelation function of the output process.
Solution Given: R_XX(τ) = (N₀/2)δ(τ) and H(ω) = 1 for −ω₀ ≤ ω ≤ ω₀
S_XX(ω) = F[R_XX(τ)] = N₀/2
S_YY(ω) = |H(ω)|²S_XX(ω) = (1)²(N₀/2) = N₀/2, |ω| ≤ ω₀
The output ACF is
R_YY(τ) = F⁻¹[S_YY(ω)] = (1/2π)∫_{−ω₀}^{ω₀} (N₀/2)e^{jωτ}dω = (N₀/2)(e^{jω₀τ} − e^{−jω₀τ})/(2πjτ) = N₀sin(ω₀τ)/(2πτ)
-w 0
Practice Problems
8.5 If Y(t) is the derivative of X(t), a band-limited white-noise process with S_XX(ω) = N₀/2 for |ω| < W,
(a) find S_YY(ω) and R_YY(τ); (b) what is the average power of the output?
8.6 A system has an impulse response h(t) = e–bt u(t). Find the power spectral density of the output Y(t) corresponding
to the input X(t).
Solved Problems
8.26 Two networks with identical impulse response h(t) = We^{−Wt}u(t) are connected in cascade. The input to the first network is x(t) = e^{−at}u(t). Find the output of the second network.
Fig. 8.14
Solution The cascade (Fig. 8.14) has the overall impulse response
h′(t) = h(t) * h(t) = ∫_{0}^{t} We^{−Wτ}We^{−W(t−τ)}dτ = W²e^{−Wt}∫_{0}^{t} dτ = W²te^{−Wt}u(t)
The output of the second network is
y(t) = ∫_{0}^{t} W²τe^{−Wτ}e^{−a(t−τ)}dτ = W²e^{−at}∫_{0}^{t} τe^{−(W−a)τ}dτ
= W²e^{−at}[−τe^{−(W−a)τ}/(W − a) − e^{−(W−a)τ}/(W − a)²]₀^{t}
= [W²/(W − a)²]{−te^{−Wt}(W − a) − e^{−Wt} + e^{−at}}
= [W²/(W − a)²]{e^{−at} − e^{−Wt}(1 + (W − a)t)}
8.27 The impulse response of a system is h(t) = t³e^{−t} for t > 0 and 0 for t < 0. If
x(t) = A for 0 < t < T and 0 elsewhere,
find the response of the system.
Solution Given x(t) and h(t) as above, for t > T the response is
y(t) = A∫_{0}^{t} τ³e^{−τ}dτ − A∫_{0}^{t−T} τ³e^{−τ}dτ
8.28 For the system shown in Fig 8.16 (a). Find the impulse response h(t) and (b) H(w).
Fig. 8.16
H(ω) = (1 − e^{−jωT})/(jωT) = e^{−jωT/2}[e^{jωT/2} − e^{−jωT/2}]/(jωT) = e^{−jωT/2}·sin(ωT/2)/(ωT/2)
8.29 Find the transfer function of the network shown in Fig. 8.18.
Fig. 8.18
Solution If the input x(t) = e^{jωt}, then the output y(t) = H(ω)e^{jωt}, i.e. y(t) = H(ω)x(t), and
dy(t)/dt = jωH(ω)e^{jωt} = jωH(ω)x(t)
From Fig. 8.19, the current through C₁ is
i₁ = C₁(d/dt){x(t) − y(t)}
and the current balance at the output node gives
C₂ dy(t)/dt = [x(t) − y(t)]/R + C₁(d/dt){x(t) − y(t)}
Substituting x(t) = e^{jωt} and y(t) = H(ω)x(t),
C₂jωH(ω)x(t) = x(t)/R − H(ω)x(t)/R + C₁jωx(t) − C₁jωH(ω)x(t)
jωC₂H(ω) = 1/R − H(ω)/R + C₁jω − C₁jωH(ω)
H(ω)[jωC₂ + 1/R + jωC₁] = (1 + jωC₁R)/R
H(ω) = (1 + jωC₁R)/[1 + jωR(C₁ + C₂)]
or, using the phasor form of the circuit shown in Fig. 8.20:
Z_p = R(1/jωC₁)/[R + 1/(jωC₁)] = R/(1 + jωRC₁), Z_s = 1/(jωC₂)
I(ω) = X(ω)/(Z_p + Z_s), and Y(ω) = X(ω)Z_s/(Z_p + Z_s)
H(ω) = Y(ω)/X(ω) = Z_s/(Z_p + Z_s) = (1/jωC₂)/[R/(1 + jωRC₁) + 1/(jωC₂)]
= (1 + jωRC₁)/(1 + jωRC₁ + jωRC₂) = (1 + jωRC₁)/[1 + jωR(C₁ + C₂)]
Solved Problems
8.30 For the network shown in Fig. 8.23, find the transfer function.
Fig. 8.23
Solution Using voltage division with Z₁ = R₁/(1 + jωC₁R₁) and Z₂ = R₂/(1 + jωC₂R₂):
V₀(ω) = I(ω)Z₂, so
H(ω) = Z₂/(Z₁ + Z₂) = [R₂/(1 + jωC₂R₂)]/[R₁/(1 + jωC₁R₁) + R₂/(1 + jωC₂R₂)]
= R₂(1 + jωC₁R₁)/[R₁(1 + jωC₂R₂) + R₂(1 + jωC₁R₁)]
= R₂(1 + jωC₁R₁)/[R₁ + R₂ + jωR₁R₂(C₁ + C₂)]
8.31 The PSD of a random process is given by S_XX(ω) = 4/(ω² + 4). Find whether this is a valid spectral density or not. If it is transmitted through the system shown in Fig. 8.26, find the PSD of the output.
Fig. 8.26
Solution Given:
S_XX(ω) = 4/(ω² + 4)
We can see that S_XX(ω) > 0 for every ω and S_XX(−ω) = S_XX(ω). Hence it is a valid PSD.
From Fig. 8.26, Y(t) = X(t + 2T) + X(t − 2T)
Applying the Fourier transform on both sides, we get
Y(ω) = e^{j2ωT}X(ω) + e^{−j2ωT}X(ω) = 2X(ω)cos(2ωT)
H(ω) = Y(ω)/X(ω) = 2cos(2ωT)
S_YY(ω) = |H(ω)|²S_XX(ω) = 4cos²(2ωT)·[4/(ω² + 4)]
8.32 A zero-mean random process is applied as input to two filters as shown in Fig. 8.27.
The PSD of the input is S_XX(ω) = η/2.
The impulse responses of the filters are
h₁(t) = e^{−t}u(t)
h₂(t) = 1 for 0 ≤ t ≤ 1 and 0 otherwise
Fig. 8.27
Find H₁(ω), H₂(ω), E[Y₁(t)] and E[Y₂(t)].
Solution Given: S_XX(ω) = η/2, h₁(t) = e^{−t}u(t), and h₂(t) = 1 for 0 ≤ t ≤ 1 and 0 otherwise.
H₁(ω) = F[e^{−t}u(t)] = 1/(1 + jω)
H₂(ω) = ∫_{−∞}^{∞} h₂(t)e^{−jωt}dt = ∫_{0}^{1} e^{−jωt}dt = (1 − e^{−jω})/(jω)
Since the input process has zero mean, E[Y₁(t)] = m_XH₁(0) = 0 and E[Y₂(t)] = m_XH₂(0) = 0.
Solution
|H(ω)|² = 64/(16 + ω²)² ⇒ |H(ω)| = 8/(16 + ω²) = H(ω)
h(t) = ?
We know F[e^{−a|t|}] = 2a/(a² + ω²)
With a = 4, F[e^{−4|t|}] = 8/(16 + ω²) ⇒ h(t) = F⁻¹[8/(16 + ω²)] = e^{−4|t|}
Solution Given:
R_XX(τ) = cos(ω₀τ)
S_XX(ω) = F[cos ω₀τ] = π[δ(ω − ω₀) + δ(ω + ω₀)]
S_YY(ω) = |H(ω)|²S_XX(ω) = [64/(16 + ω²)²]·π[δ(ω − ω₀) + δ(ω + ω₀)]
= [64π/(16 + ω₀²)²][δ(ω − ω₀) + δ(ω + ω₀)]
8.35 X(t) is a WSS process which is the input to a linear system with transfer function
H(ω) = 1/(a + jω)
where a > 0. If X(t) is zero-mean white noise with PSD N₀/2, determine the following:
(a) h(t) (b) S_XY(ω) (c) R_XY(τ) (d) R_YX(τ) (e) S_YX(ω) (f) S_YY(ω)
Solution Given H(ω) = 1/(a + jω) and S_XX(ω) = N₀/2
(a) h(t) = F⁻¹[1/(a + jω)] = e^{−at}u(t)
(b) S_XY(ω) = H(ω)S_XX(ω) = (N₀/2)/(a + jω)
(c) R_XY(τ) = F⁻¹[(N₀/2)/(a + jω)] = (N₀/2)e^{−aτ}u(τ)
(d) R_YX(τ) = F⁻¹[(N₀/2)/(a − jω)] = (N₀/2)e^{aτ}u(−τ)
(e) S_YX(ω) = H*(ω)S_XX(ω) = (N₀/2)/(a − jω)
(f) S_YY(ω) = |H(ω)|²S_XX(ω) = (N₀/2)/(a² + ω²)
8.36 White noise with power density 10 W/Hz is applied to a system with impulse response
h(t) = te^{−Wt} for 0 < t and 0 for t < 0
Find the mean square value of the response.
Solution S_NN(ω) = 10, so R_NN(τ) = 10δ(τ). The mean square value of the response is
E[Y²(t)] = ∫_{−∞}^{∞}∫_{−∞}^{∞} R_NN(τ₁ − τ₂)h(τ₁)h(τ₂)dτ₁dτ₂ = 10∫_{−∞}^{∞}∫_{−∞}^{∞} δ(τ₁ − τ₂)τ₁e^{−Wτ₁}τ₂e^{−Wτ₂}dτ₁dτ₂
= 10∫_{0}^{∞} τ²e^{−2Wτ}dτ = 10·2/(2W)³ = 10/(4W³) = 2.5/W³
8.37 A stationary random process X(t) is applied to the input of a system for which h(t) = t²e^{−8t}u(t). If E[X(t)] = 5, what is the mean value of the system's response Y(t)?
Solution
m_Y = m_X∫_{−∞}^{∞} h(τ)dτ = 5∫_{0}^{∞} t²e^{−8t}dt = 5·2/8³ = 5/256
Practice Problem
8.9 A random process X(t) is applied to a network with impulse response h(t) = e–bt u(t), where b > 0 is a constant. The
cross correlation of X(t) with the output Y(t) is known to have the same form
RXY(t) = te–bt u(t)
(a) Find the autocorrelation of Y(t).
(b) What is the average power of Y(t)?
Solved Problems
Solution Given:
h(t) = te^{−bt}u(t) and R_XY(τ) = τe^{−bτ}u(τ)
We know
R_YY(τ) = R_XY(τ) * h(−τ) = ∫_{−∞}^{∞} R_XY(τ + τ₁)h(τ₁)dτ₁
= ∫_{0}^{∞} (τ + τ₁)e^{−b(τ+τ₁)}τ₁e^{−bτ₁}dτ₁ = e^{−bτ}∫_{0}^{∞} (τ₁² + ττ₁)e^{−2bτ₁}dτ₁
= e^{−bτ}[2/(2b)³ + τ/(2b)²] = e^{−bτ}[1/(4b³) + τ/(4b²)] = e^{−bτ}(1 + bτ)/(4b³)
Since R_YY(τ) is an even function of τ, for any τ,
R_YY(τ) = [(1 + b|τ|)/(4b³)]e^{−b|τ|}
(b) Power = E[Y²(t)] = R_YY(0) = 1/(4b³)
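The closed form obtained above can be cross-checked by computing R_XY(τ) * h(−τ) numerically; the value b = 1.5 and the grid spacing below are assumptions, and the agreement is limited only by discretization.

import numpy as np

b, dt = 1.5, 1e-2                               # assumed parameter and grid
t = np.arange(-20, 20, dt)
h = np.where(t >= 0, t * np.exp(-b * t), 0.0)   # h(t) = t e^{-bt} u(t)
Rxy = h.copy()                                  # R_XY(tau) has the same form
Ryy = np.convolve(Rxy, h[::-1], mode="same") * dt   # R_XY(tau) * h(-tau)
Ryy_exact = (1 + b * np.abs(t)) * np.exp(-b * np.abs(t)) / (4 * b**3)
print(np.max(np.abs(Ryy - Ryy_exact)))          # small, discretization-limited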
8.39 Two identical networks each with impulse response h(t) = te–2t u(t) are cascaded. A wide-sense
stationary process X(t) is applied to the system. Find Y(t) and E[Y(t)] if E[X(t)] = 10.
Solution The system is shown in Fig. 8.28.
Fig. 8.28
The output of the second network is
Y(t) = ∫_{−∞}^{∞} h(τ₂)Y₁(t − τ₂)dτ₂ = ∫_{−∞}^{∞}∫_{−∞}^{∞} h(τ₁)h(τ₂)X(t − τ₁ − τ₂)dτ₁dτ₂
= ∫_{−∞}^{∞}∫_{−∞}^{∞} X(t − τ₁ − τ₂)τ₁e^{−2τ₁}τ₂e^{−2τ₂}u(τ₁)u(τ₂)dτ₁dτ₂
Taking the expectation,
E[Y(t)] = E[X]{∫_{0}^{∞} τe^{−2τ}dτ}² = 10{[−τe^{−2τ}/2 − e^{−2τ}/4]₀^{∞}}²
E[Y] = 10(1/4)² = 10/16 = 5/8
8.40 A wide-sense stationary process X(t) with mean value 10 and power spectrum
SXX(w) = 10pd(w) + 4/[1 + (w/4)2]
is applied to a network with impulse response h(t) = 2e–2|t|
(a) Find H(w) for the network.
(b) Find E[Y] and (c) SYY(w)
Solution Given: h(t) = 2e^{−2|t|}
(a) H(ω) = F[2e^{−2|t|}] = 2F[e^{−2|t|}] = 2·2(2)/(2² + ω²) = 8/(4 + ω²)
(b) E[Y] = E[X]H(0) = 10·(8/4) = 20
(c) S_YY(ω) = |H(ω)|²S_XX(ω) = {10πδ(ω) + 4/[1 + (ω/4)²]}[8/(4 + ω²)]²
= 40πδ(ω) + {4/[1 + (ω/4)²]}[8/(4 + ω²)]²
8.41 A white noise with RNN(t) = 0.1 d(t) is applied to a network with impulse response
h(t) = te–2t u(t)
(a) Find the network’s output noise power in a 1-ohm resistor.
(b) Obtain an expression for the output power spectrum.
H(ω) = 1/(2 + jω)², so |H(ω)|² = 1/(4 + ω²)²
(a) Output noise power:
E[Y²(t)] = ∫_{−∞}^{∞}∫_{−∞}^{∞} R_NN(τ₁ − τ₂)h(τ₁)h(τ₂)dτ₁dτ₂ = 0.1∫_{0}^{∞} t²e^{−4t}dt = 0.1·2/4³ = 1/320 W
(b) S_YY(ω) = |H(ω)|²S_NN(ω) = 0.1/(4 + ω²)²
8.42 A stationary random signal X(t) has an autocorrelation function R_XX(τ) = 5e^{−2|τ|}. It is added to white noise for which N₀/2 = 10⁻², and the sum is applied to a filter having transfer function
H(ω) = 2/(2 + jω)²
If the noise is independent of X(t),
(a) Find the signal component of the output power spectrum and the average power in the output
signal.
(b) Find the power spectrum of, and average power in the output noise, and
(c) What is the ratio of the output signal’s power to the output average noise power?
Solution Given: R_XX(τ) = 5e^{−2|τ|}
S_XX(ω) = 5·2(2)/(2² + ω²) = 20/(4 + ω²)
H(ω) = 2/(2 + jω)², so the power transfer function is |H(ω)|² = 4/(4 + ω²)²
(a) Signal component of the output power spectrum:
|H(ω)|²S_XX(ω) = [4/(4 + ω²)²][20/(4 + ω²)] = 80/(4 + ω²)³
Average power in the output signal:
P_s = (1/2π)∫_{−∞}^{∞} 80/(4 + ω²)³dω
Let ω = 2tanθ, so dω = 2sec²θdθ, with θ = −π/2 at ω = −∞ and θ = π/2 at ω = ∞:
P_s = (1/2π)∫_{−π/2}^{π/2} 80(2sec²θ)/(4³sec⁶θ)dθ = [5/(4π)]∫_{−π/2}^{π/2} cos⁴θdθ = [5/(4π)]∫_{−π/2}^{π/2} [(1 + cos 2θ)/2]²dθ
= [5/(16π)]∫_{−π/2}^{π/2} (1 + cos²2θ + 2cos 2θ)dθ = [5/(16π)](3π/2) = 15/32
(b) The output noise power spectrum:
S_NN(ω) = (N₀/2)·4/(4 + ω²)² = 2N₀/(4 + ω²)²
Average noise power:
P_N = (1/2π)∫_{−∞}^{∞} 2N₀/(4 + ω²)²dω = (N₀/π)∫_{−∞}^{∞} dω/(4 + ω²)²
= (N₀/π)∫_{−π/2}^{π/2} 2sec²θ/(4²sec⁴θ)dθ = [N₀/(8π)]∫_{−π/2}^{π/2} cos²θdθ = [N₀/(8π)](π/2) = N₀/16
Given N₀/2 = 10⁻², so N₀ = 2 × 10⁻² and
P_N = 2 × 10⁻²/16 = 10⁻²/8
(c) Ratio of output signal power to output average noise power:
(15/32)/(10⁻²/8) = (15/32)·(8/10⁻²) = 375
8.43 A white noise with power density N0/2 is applied to a low-pass network for which |H(0)| = 2; it has
a noise bandwidth of 2 MHz. If the average output noise power is 0.1 W in a 1 W resistor, what is N0?
Solution Given: |H(0)| = 2; W_N = 2π(2 × 10⁶) rad/s; P_YY = 0.1 W
We know
P_YY = (N₀/2π)|H(0)|²W_N
That is,
(N₀/2π)|H(0)|²W_N = 0.1
N₀ = 0.1(2π)/(|H(0)|²W_N) = 0.1(2π)/(4 × 2π × 2 × 10⁶) = 1.25 × 10⁻⁸ W/Hz
Fig. 8.29 (a) An arrangement to measure power density spectrum of a low-pass process
(b) Power spectrum of X(t) (c) Power transfer function of BPF
The power at the filter output when the bandpass filter is tuned to ω_f is
P_YY(ω_f) = (1/2π)∫_{−∞}^{∞} S_YY(ω)dω   (8.49)
⇒ P_YY(ω_f) = (1/2π)∫_{−∞}^{∞} |H(ω)|²S_XX(ω)dω   (8.51)
For real filters and real X(t), the PSD of X(t) and the power transfer function are even functions of ω. Therefore, we can write
P_YY(ω_f) = (1/2π){2∫_{0}^{∞} |H(ω)|²S_XX(ω)dω} ≈ (1/π)S_XX(ω_f)∫_{0}^{∞} |H(ω)|²dω   (8.52)
= S_XX(ω_f)|H(ω_f)|²W_N/π
where
W_N = ∫_{0}^{∞} |H(ω)|²dω / |H(ω_f)|²   (8.53)
is known as the noise bandwidth. Rearranging Eq. (8.52) and substituting Eq. (8.53), we get
S_XX(ω_f) = πP_YY(ω_f)/{W_N|H(ω_f)|²}   (8.54)
That is, an approximation to S_XX(ω) can be obtained by measuring the power at the output of the filter when the filter is tuned to ω = ω_f and multiplying it by the constant π/{W_N|H(ω_f)|²}.
Noise Bandwidth
Consider a system with low-pass characteristics and transfer function H(ω). If the system input is white noise with PSD N₀/2, then the average power at the output of the system is
P_YY = (1/2π)∫_{−∞}^{∞} |H(ω)|²(N₀/2)dω   (8.55)
If the system impulse response is real, the power transfer function |H(ω)|² is an even function of ω. Therefore, Eq. (8.55) can be written as
P_YY = (N₀/2π)∫_{0}^{∞} |H(ω)|²dω   (8.56)
Now consider an idealized system that produces the same average power as the actual system when excited by the same white-noise source. The power transfer function of such a system is
|H_I(ω)|² = |H(0)|² for |ω| < W_N, and 0 for |ω| > W_N   (8.57)
where W_N is a constant that makes the output powers of the two systems equal and |H(0)|² is the power transfer function value at midband.
The output power of the idealized system is
P′_YY = (1/2π)∫_{−∞}^{∞} |H_I(ω)|²S_XX(ω)dω = (1/2π)∫_{−W_N}^{W_N} |H(0)|²S_XX(ω)dω
Since |H(0)|² is even and S_XX(ω) = N₀/2,
P′_YY = (2/2π)∫_{0}^{W_N} |H(0)|²(N₀/2)dω = N₀|H(0)|²W_N/(2π)
Equating P_YY and P′_YY gives
W_N = ∫_{0}^{∞} |H(ω)|²dω / |H(0)|²   (8.58)
W_N is called the noise bandwidth of the system.
Let the input be applied to an idealized system with a bandpass transfer function having a centre-band frequency ω₀. The power transfer function of the system is shown in Fig. 8.30.
Mathematically,
|H_I(ω)|² = |H(ω₀)|² for ω₀ − W_N/2 ≤ ω ≤ ω₀ + W_N/2 and −ω₀ − W_N/2 ≤ ω ≤ −ω₀ + W_N/2, and 0 elsewhere
Therefore, we can write
(N₀/2π)∫_{0}^{∞} |H(ω)|²dω = (1/2π){∫_{−ω₀−W_N/2}^{−ω₀+W_N/2} (N₀/2)|H(ω₀)|²dω} + (1/2π)∫_{ω₀−W_N/2}^{ω₀+W_N/2} (N₀/2)|H(ω₀)|²dω
= (1/2π)(N₀/2)|H(ω₀)|²[2W_N]
⇒ W_N = ∫_{0}^{∞} |H(ω)|²dω / |H(ω₀)|²
Solved Problems
8.44 Find the noise bandwidth of the system having the power transfer function
|H(ω)|² = 1/[1 + (ω/W)²]²
Solution
Noise bandwidth W_N = ∫_{0}^{∞} |H(ω)|²dω / |H(0)|², and |H(0)|² = 1
∫_{0}^{∞} |H(ω)|²dω = ∫_{0}^{∞} dω/[1 + (ω/W)²]². Let ω = W tanθ, so dω = W sec²θdθ:
= ∫_{0}^{π/2} W sec²θ/(1 + tan²θ)²dθ = W∫_{0}^{π/2} cos²θdθ = W∫_{0}^{π/2} [(1 + cos 2θ)/2]dθ = πW/4
Hence W_N = πW/4.
Solution Given: |H(ω)|² = ω⁴/[1 + (ω/W)²]⁴
(a) To find ω₀, the frequency at which |H(ω)|² is maximum, set d|H(ω)|²/dω = 0:
d/dω{ω⁴/[1 + (ω/W)²]⁴} = {4ω³[1 + (ω/W)²]⁴ − ω⁴·4[1 + (ω/W)²]³·2(ω/W)(1/W)}/[1 + (ω/W)²]⁸
Equating the numerator to zero and solving for ω₀:
4ω³[1 + (ω/W)²] = 8ω⁵/W² ⇒ 1 + ω²/W² = 2ω²/W² ⇒ ω₀ = W
(b) |H(ω₀)|² = W⁴/[1 + 1]⁴ = W⁴/16 = 0.0625W⁴
(c) W_N = ∫_{0}^{∞} |H(ω)|²dω / |H(ω₀)|². Let ω = W tanθ, dω = W sec²θdθ:
∫_{0}^{∞} |H(ω)|²dω = ∫_{0}^{∞} ω⁴/[1 + (ω/W)²]⁴dω = ∫_{0}^{π/2} W⁵tan⁴θ sec²θ/sec⁸θ dθ = W⁵∫_{0}^{π/2} sin⁴θcos²θdθ = πW⁵/32
W_N = (πW⁵/32)/(W⁴/16) = πW/2 ≈ 1.57W
Solution The transfer function of a low-pass RC filter is H(ω) = 1/(1 + jωRC), so that
|H(ω)| = 1/√(1 + ω²R²C²) and |H(0)|² = 1
W_N = ∫_{0}^{∞} |H(ω)|²dω / |H(0)|² = ∫_{0}^{∞} dω/(1 + ω²R²C²)
Let ωRC = u, so dω = du/RC:
= (1/RC)∫_{0}^{∞} du/(1 + u²) = (1/RC)[tan⁻¹u]₀^{∞} = π/(2RC)
8.47 What is the noise equivalent bandwidth of an ideal BPF with bandwidth W?
Solution
W_N = ∫_{0}^{∞} |H(ω)|²dω / |H(ω₀)|²
∫_{0}^{∞} |H(ω)|²dω = ∫_{ω₀−W/2}^{ω₀+W/2} |H(ω)|²dω = W
|H(ω₀)|² = 1 for an ideal BPF, so W_N = W.
8.48 The output of a linear filter is C times the input X(t). Show that
RYY(t) = C2RXX(t)
Solution Given: Y(t) = CX(t)
R_YY(t, t + τ) = E[Y(t)Y(t + τ)] = E[CX(t)·CX(t + τ)] = C²E[X(t)X(t + τ)] = C²R_XX(t, t + τ)
and R_YY(τ) = C²R_XX(τ)
= (1/t₀²){E[X(t)X(t + τ)] − E[X(t − t₀)X(t + τ)] − E[X(t)X(t − t₀ + τ)] + E[X(t − t₀)X(t − t₀ + τ)]}
= (1/t₀²)[R_XX(τ) − R_XX(τ + t₀) − R_XX(τ − t₀) + R_XX(τ)]
= (1/t₀²)[2R_XX(τ) − R_XX(τ + t₀) − R_XX(τ − t₀)]
Solution The output of the finite-time integrator is
y(t) = (1/T)∫_{t−T}^{t} x(t′)dt′
With x(t) = δ(t), the impulse response is
h(t) = (1/T)∫_{t−T}^{t} δ(t′)dt′ = (1/T)[u(t) − u(t − T)]
H(ω) = F[h(t)] = (1/T)∫_{0}^{T} e^{−jωt}dt = [e^{−jωt}/(−jωT)]₀^{T} = (1 − e^{−jωT})/(jωT) = e^{−jωT/2}·sin(ωT/2)/(ωT/2)
8.51 A white-noise process with a PSD of S_XX(ω) = N₀/2 is passed through a finite-time integrator whose output is given by
Y(t) = (1/T)∫_{t−T}^{t} X(u)du
Find (a) the PSD of the output process, (b) the total power in the output process, and (c) the noise equivalent bandwidth of the integrator.
Solution
(a) S_YY(ω) = |H(ω)|²S_XX(ω) = [sin²(ωT/2)/(ωT/2)²](N₀/2)
(b) |H(0)|² = lim_{ω→0} sin²(ωT/2)/(ωT/2)² = 1
∫_{0}^{∞} |H(ω)|²dω = ∫_{0}^{∞} [sin²(ωT/2)/(ωT/2)²]dω. Let ωT/2 = q, so dω = (2/T)dq:
= (2/T)[(1/2)∫_{−∞}^{∞} (sin²q/q²)dq] = (1/T)[π] = π/T
The total power in the output is P_Y = (1/2π)(N₀/2)·2(π/T) = N₀/(2T)
(c) The noise equivalent bandwidth is W_N = (π/T)/|H(0)|² = π/T
The spectral density is defined as power per unit bandwidth. Therefore, the PSD of thermal noise is
Pn
SN = = KT watts/Hz (8.60)
B
The noisy resistor R can be modelled as a noise source Vn(t) in series with
a resistor R.
The average or mean noise voltage across the resistor is zero, but the rms
value is finite. Therefore, we consider rms value of Vn for noise calculations.
Fig. 8.31
Let the noisy resistor R drive a load resistance R_L as shown in Fig. 8.31.
According to the maximum power transfer theorem, the circuit delivers maximum power when R = R_L. In such a case, the load is matched to the source and the maximum power delivered to the load is
V_n²/(4R) or I_n²R/4 watts
where V_n² and I_n² are the mean-square values of the voltage and current respectively. Equating the expression for thermal noise power to the maximum power delivered gives
Vn2
= KT Bn (8.61)
4R
Vn2 = 4 R KT Bn (8.62)
Thus, a noisy resistor R can be represented as a noise-free resistor R in series with noise voltage vn(t) with
mean square value.
V2n = 4 RKT Bn volt2 (8.63)
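Equation (8.63) gives a quick numerical estimate of resistor noise. The resistance, temperature and bandwidth below are assumed example values.

import math

k = 1.38e-23                     # Boltzmann's constant, J/K
R, T, Bn = 1e6, 290.0, 1e6       # assumed: 1 MOhm, room temperature, 1 MHz
Vn = math.sqrt(4 * R * k * T * Bn)   # rms voltage from Eq. (8.63)
print(round(Vn * 1e6, 1), "uV rms")  # ~126.5 uV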
Thus, a noisy resistance can be replaced by a noiseless resistance in parallel with noise current source with
mean square value
In2 = 4 GKT Bn
I_n² = I_n1² + I_n2² + ··· + I_nN²   (8.69)
That is, the mean square values are added; the total rms current is
I_n = √(I_n1² + I_n2² + ··· + I_nN²)   (8.70)
The available power gain is
G_a = N_a0s/N_as = [v₀²(t)/4R₀]/[v_i²(t)/4R_s] = [v₀²(t)/v_i²(t)]·(R_s/R₀)   (8.76)
When N two-port networks with individual gains G₁, G₂, ..., G_N are connected in cascade, the available power gain is
G_a = ∏_{i=1}^{N} G_i   (8.77)
where h is Planck's constant (6.62 × 10⁻³⁴ J·s), K is Boltzmann's constant (1.38 × 10⁻²³ J/K) and T is the temperature in kelvin.
Approximating e^{hf/KT} ≈ 1 + hf/KT,   (8.79)
S_R(f) = 2Rhf/(e^{hf/KT} − 1) ≈ 2Rhf/(1 + hf/KT − 1) = 2RKT V²/Hz   (8.80)
Therefore, we can say that the mean square voltage spectral density of thermal noise is constant at
SR(f) = 2RK T V2/Hz (8.81)
A noise resistance with spectral density SR(f ) can be replaced by a noiseless resistance of the same value
with noise source having PSD = 2RK T. If this noise source is connected to a resistive load, then maximum
power density will be delivered which is equal to
SR ( f ) 2 R KT KT
Sn(f) = = = W/Hz (8.82)
4R 4R 2
N0
= W/Hz (8.83)
2
(S/N)₀ = p_si/[KT_sB_N(1 + T_e/T_s)] = (S/N)_i/(1 + T_e/T_s) = (1/F)(S/N)_i   (8.92)
where (S/N)_i is the SNR at the input and F is known as the noise figure.
The standard spot noise figure of a two-port network is defined as the ratio of the total output noise power to the output noise power of a noiseless two-port network at room temperature (T₀ = 290 K).
⇒ F = N_ao/N_aos = (N_an + G_aN_as)/N_aos = (N_an + G_aN_as)/(G_aN_as)
= 1 + N_an/(G_aN_as) = 1 + KT_eB_nG_a/(KT₀B_nG_a)   (8.93a)
⇒ F₀ = 1 + T_e/T₀   (8.93b)
(S/N)₀ = (1/F₀)(S/N)_i   (8.94)
Taking logarithms on both sides,
10 log(S/N)₀ = −10 log F₀ + 10 log(S/N)_i   (8.95)
When a network is used with the source for which it is intended to operate, F is called the operating noise figure:
F_op = 1 + T_e/T_s   (8.96)
The average operating noise figure is
F̄ = ∫_{0}^{∞} F·T_sG_a dω / ∫_{0}^{∞} T_sG_a dω   (8.102)
If T_s is constant,
F̄ = ∫_{0}^{∞} FG_a dω / ∫_{0}^{∞} G_a dω   (8.103)
For T_s = 290 K,
F̄₀ = ∫_{0}^{∞} F₀G_a dω / ∫_{0}^{∞} G_a dω   (8.104)
Similarly, the average source temperature and average effective input noise temperature are
T̄_s = ∫_{0}^{∞} T_sG_a dω / ∫_{0}^{∞} G_a dω   (8.108)
T̄_e = ∫_{0}^{∞} T_eG_a dω / ∫_{0}^{∞} G_a dω   (8.109)
For two stages in cascade, the total available output noise power is
N_ao = G₁G₂KT_sB_n + G₁G₂KT_{e1}B_n + G₂KT_{e2}B_n
F = N_ao/(G₁G₂KT_sB_n)   (8.117)
= 1 + T_{e1}/T_s + T_{e2}/(G₁T_s)   (8.118)
We have the relations
F₁ = 1 + T_{e1}/T_s ⇒ T_{e1} = (F₁ − 1)T_s   (8.119)
F₂ = 1 + T_{e2}/T_s ⇒ T_{e2} = (F₂ − 1)T_s   (8.120)
Substituting Eq. (8.119) and Eq. (8.120) in Eq. (8.118),
F = 1 + (F₁ − 1) + (F₂ − 1)/G₁ = F₁ + (F₂ − 1)/G₁   (8.121)
The generalization of this result to an arbitrary number of stages is known as Friis' formula:
F = F₁ + (F₂ − 1)/G₁ + (F₃ − 1)/(G₁G₂) + ···   (8.122)
G1 G1 G2
The above formula can be expressed in terms of noise equivalent temperature as
1 + T_e/T_s = 1 + T_{e1}/T_s + T_{e2}/(T_sG₁) + T_{e3}/(T_sG₁G₂) + ···   (8.123)
⇒ T_e = T_{e1} + T_{e2}/G₁ + T_{e3}/(G₁G₂) + ···   (8.124)
N_ao = (K/L)[T_s + T_e]B_n   (8.128)
Since the attenuator is resistive and assumed to be at the same temperature T_s as the equivalent resistance at its input, the available output power is
N_ao = KT_sB_n   (8.129)
Equating Eq. (8.128) and Eq. (8.129), we get
KT_sB_n = (K/L)[T_s + T_e]B_n ⇒ LT_s = T_s + T_e
T_e = T_s(L − 1)   (8.130)
Te
F0 = 1 + fi Te = T0 ( F0 - 1) (8.132)
T0
Comparing Eq. (8.131) and Eq. (8.132), we get
F0 = L
REVIEW QUESTIONS
11. Define (a) noise figure, and (b) Spot noise figure.
12. Derive the mathematical expression of noise figure
13. Explain the concept of effective input noise temperature.
14. For three amplifiers in cascade, derive an expression of overall noise figure in terms of the power
gain and noise figures of the individual amplifiers.
15. Bring out the importance of Friis’ formula.
Solved Problems
8.52 An electronic system has an amplifier followed by a mixer stage. The noise figures of the amplifier
and mixer are 30 dB and 20 dB respectively. If the power gain of the amplifier is 10 dB; calculate the
overall noise figure referred to the input.
8.53 The noise present at the input of a two-port network is 5 µW. The noise figure is F = 0.75 dB. The receiver gain is G_a = 10⁷. Find
(a) the total output available noise power, and
(b) the noise power contributed by the two-port network.
Solution We have N_ao = N_an + N_aos, where N_aos = G_aN_as
Given: N_as = 5 µW, G_a = 10⁷ ⇒ N_aos = 5 × 10⁻⁶ × 10⁷ = 50 W
F = N_ao/N_aos ⇒ N_ao = F·N_aos
F = 0.75 dB = 1.188 : 1
(a) N_ao = (1.188)(50) = 59.425 W
(b) N_an = N_ao − N_aos = 59.425 − 50 = 9.425 W
8.54 A low-noise receiver for satellite ground station consists of the following stages:
Antenna with Ti = 125°K
Wave guide with a loss of 0.5 dB
Power amplifier with Ga = 30 dB, Te = 6°K Bn = 20 MHz
TWT amplifier with Ga = 16 dB, F = 6 dB, Bn = 20 MHz
Calculate effective noise temperature of the system.
T_e = 125 + 35.385 + 1.12[6 + (2.98 × 290)/1000] = 168.07 K
8.55 Two resistors with resistance R1 and R2 are connected in parallel and have physical temperatures
T1 and T2 respectively. Find the effective noise temperature of Ts of an equivalent resistor with resistance
equal to the parallel combination of R1 and R2.
Fig. 8.39
Solution The short-circuit noise currents of the two resistors add in mean square:
i_n² = 4KT₁B_n/R₁ + 4KT₂B_n/R₂
Representing the parallel combination as a single resistor at effective temperature T_s,
4KT_sB_n(1/R₁ + 1/R₂) = 4KB_n(T₁R₂ + T₂R₁)/(R₁R₂)
⇒ T_s = (T₁R₂ + T₂R₁)/(R₁ + R₂)
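The result above is the conductance-weighted average T_s = Σ(T_i/R_i)/Σ(1/R_i), which also yields the three-resistor answer quoted in Practice Problem 8.10. A small numerical check, with assumed values:

def parallel_noise_temp(T, R):
    # conductance-weighted average: T_s = sum(T_i/R_i) / sum(1/R_i)
    return sum(t / r for t, r in zip(T, R)) / sum(1.0 / r for r in R)

T1, T2, R1, R2 = 300.0, 400.0, 50.0, 100.0       # assumed values
print(parallel_noise_temp([T1, T2], [R1, R2]))   # 333.33 K
print((T1 * R2 + T2 * R1) / (R1 + R2))           # same result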
Practice Problems
8.10 Repeat Solved Problem 8.55 if three resistors R₁, R₂ and R₃ are connected in parallel.
(Ans. T_s = (T₁R₂R₃ + T₂R₁R₃ + T₃R₁R₂)/(R₁R₂ + R₂R₃ + R₁R₃))
Solved Problems
8.56 For the resistive network shown in Fig. 8.40, find the rms noise voltage appearing at the output
terminals if bandwidth = 50 kHz and T = 290°K.
Fig. 8.40
Solution Let us replace each resistor with a noise voltage source in series with a resistance and then find the noise voltage at the output due to each resistor. The noise equivalent circuit is shown in Fig. 8.41.
Fig. 8.41
V01 is the output noise voltage due to V1, V02 is the output noise voltage due to V2 and V03 is the output
noise voltage due to V3.
V₀₁ = V₁·R₃/(R₁ + R₂ + R₃) = √(4KTR₁B_n)·R₃/(R₁ + R₂ + R₃)
V₀₂ = V₂·R₃/(R₁ + R₂ + R₃) = √(4KTR₂B_n)·R₃/(R₁ + R₂ + R₃)
V₀₃ = V₃·(R₁ + R₂)/(R₁ + R₂ + R₃) = √(4KTR₃B_n)·(R₁ + R₂)/(R₁ + R₂ + R₃)
V₁ = V₂ = V₃ = 6.326 × 10⁻⁷ V
Substituting R₁ = R₂ = R₃ = 500 Ω,
V₀₁ = V₀₂ = 2.108 × 10⁻⁷ V; V₀₃ = 4.217 × 10⁻⁷ V
Since the three noise sources are independent, the mean square output voltages add:
V₀ = √(V₀₁² + V₀₂² + V₀₃²) ≈ 5.16 × 10⁻⁷ V
8.57 Two resistors of 50 kW and 100 kW are at room temperature. For a bandwidth of 500 kHz, calculate
the thermal noise voltage generated by (a) each resistor, (b) the two resistors in series, and (c) the two
resistors in parallel.
Solution With T = 290 K and B_n = 500 kHz:
(a) V₁ = √(4KTR₁B_n) = 20 µV for R₁ = 50 kΩ, and V₂ = √(4KTR₂B_n) = 28.3 µV for R₂ = 100 kΩ
(b) For the series combination, R_s = 150 kΩ: V_s = √(4KTR_sB_n) = 34.65 µV
(c) For the parallel combination, R_p = R₁ ∥ R₂ = (100)(50)/150 = 33.33 kΩ: V_p = √(4KTR_pB_n) = 16.33 µV
8.58 An amplifier has three stages for which Te1 = 250 K, Te2 = 400 K and Te3 = 900 K. If the available
power gain of the second stage is 10, what gain must the first stage have to guarantee an effective input
noise temperature of 280 K.
Solution Given: Te1 = 250 K, Te2 = 400 K and Te3 = 900 K and G2 = 10
Te = 280
We have T_e = T_{e1} + T_{e2}/G₁ + T_{e3}/(G₁G₂)
280 = 250 + 400/G₁ + 900/(10G₁)
30 = (1/G₁)[400 + 90]
G₁ = 490/30 = 16.33
8.59 An amplifier has a standard spot noise figure F0 = 10 dB. An engineer uses the amplifier to amplify
the output of antenna whose temperature Ts = 200 K
(a) What is the effective input noise temperature of the amplifier?
(b) What is the operating noise figure?
Practice Problem
8.11 The noise figure of an amplifier at room temperature is 0.5 dB. Find the equivalent temperature. (Ans. 35.38 K)
REVIEW QUESTIONS
16. Derive the relation between PSDs of input and output random process of an LTI system.
17. Discuss the significance of noise equivalent temperature of an electronic system.
18. Derive the expression for noise figure.
19. A Gaussian random process X(t) is applied to a stable linear filter. Show that the random process Y(t)
developed at the output of the filter is also Gaussian.
20. Explain the methods of testing the following properties of a system:
(i) Linearity (ii) Time invariance (iii) Causality
21. Which of the following noise parameters is true representation of noise in electrical circuits?
(i) Noise figure (ii) Noise temperature (iii) Noise bandwidth
Support your answer with the help of suitable examples
22. Write short notes on
(i) Noise spectral density
(ii) Noise figure
23. Explain available power of a noise source.
24. What are the important parameters that determine the overall noise figure of a multistage filtering?
25. For three stage amplifier, derive an expression for overall noise figure in terms of Gains and noise
figures of the individual amplifiers.
EXERCISES
Problems
1. A signal x(t) = e–bt u(t) is applied to a network having impulse response h(t) = u(t). Find the system’s
response.
2. A rectangular pulse of amplitude A and duration T is applied to a system with impulse response
h(t) = u(t). Find y(t).
The rectangular pulse is defined as
x(t) = A for 0 < t < T and 0 elsewhere
3. The impulse response of a system is
h(t) = te^{−t} for t ≥ 0 and 0 otherwise
Find the response of the network to the pulse
x(t) = A for 0 < t < T and 0 elsewhere
4. Repeat Exercise 3 if the network's impulse response is
h(t) = t²e^{−t} for t ≥ 0 and 0 for t < 0
5. Find the transfer function of the network of Fig. 8.42.
Fig. 8.42
6. Find the transfer function of the network shown in Fig. 8.43.
Fig. 8.43
7. Determine which of the following impulse responses do not correspond to a system that is stable, or
realizable, or both, and state why?
(a) h(t) = u(t − 2) (b) h(t) = te^{−t}u(t)
(c) h(t) = e^{t}u(−t) + e^{−t}u(t) (d) h(t) = sin(ω₀t)
(e) h(t) = e^{−t²}
8. A random process X(t) is applied to a linear time-invariant system. If the response Y(t) = X(t) –
X(t – T). Find the system’s transfer function.
Fig. 8.44
Multiple-Choice Questions
1. A system that obeys superposition principle is said to be a
(a) Causal system (b) Linear system
(c) Static system (d) Time-invariant system
2. The system y(t) = sinh[x(t)] is a
(a) Linear system (b) Time-invariant system
(c) Causal system (d) Non-causal system
3. The convolution integral is given by
(a) y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ)dτ (b) y(t) = ∫_{−∞}^{∞} x(t)h(t − τ)dτ
(c) y(t) = ∫_{−∞}^{∞} x(τ)h(t + τ)dτ (d) y(t) = ∫_{−∞}^{∞} x(t + τ)h(τ)dτ
4. The system is said to be stable if its impulse response h(t) satisfies the condition
•
(a) h(t ) = 0 for t < 0 (b) Ú h(t ) dt = •
-•
•
(c) Ú h(t ) dt < • (d) h(t ) π 0 for t < 0
-•
Fig. 8.45
(a) jωRC + 1 (b) 1/(1 + jωRC) (c) jωRC (d) 1/(1 − jωRC)
6. Which of the following is/are correct?
(a) RYX(t) = RXX(t) * h(t) (b) RXY(t) = RXX(t) * h(t)
(c) RYX(t) = RXX(t) * h(–t) (d) RYY(t) = RXX(t) * h(t)
7. The noise bandwidth of a low-pass RC filter is
(a) π/RC (b) π/(2RC) (c) πRC (d) 2πRC
8. Which of the following is/are correct?
(a) T₀ = T_e(F₀ − 1) (b) T_e = T₀/(F₀ − 1) + 1
(c) T_e = T₀(F₀ − 1) (d) T_e + T₀ = F₀ − 1