
• Random telegraph process: A random telegraph process Y(t), t ≥ 0, takes values +1 and −1, with Y(0) = +1 with probability 1/2 and Y(0) = −1 with probability 1/2, and Y(t) changes polarity with each event of a Poisson process with rate λ > 0

Sample path: y(t) alternates between +1 and −1, switching polarity at the Poisson event times x1, x2, x3, x4, x5
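A minimal NumPy sketch of this construction (the rate lam, horizon T, and seed are arbitrary illustrative choices, not values from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, T = 2.0, 5.0  # hypothetical rate and time horizon

# Poisson event times: cumulative sums of Exponential(1/lam) interarrivals
arrivals = np.cumsum(rng.exponential(1 / lam, size=200))
arrivals = arrivals[arrivals < T]

y0 = rng.choice([-1, 1])  # Y(0) = +1 or -1, each with probability 1/2

def y(t):
    """Telegraph value at time t: y0 flipped once per event in (0, t]."""
    return y0 * (-1) ** np.searchsorted(arrivals, t, side="right")

ts = np.linspace(0, T, 1000)
path = np.array([y(t) for t in ts])  # piecewise-constant path in {+1, -1}
```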


Mean and Autocorrelation Functions

• For a random vector X the first and second order moments are
◦ mean µ = E(X)
◦ correlation matrix RX = E(XX^T)
• For a random process X(t) the first and second order moments are
◦ mean function: µX (t) = E(X(t)) for t ∈ T
! "
◦ autocorrelation function: RX (t1, t2) = E X(t1)X(t2) for t1, t2 ∈ T
• The autocovariance function of a random process is defined as
#! "! "$
CX (t1, t2) = E X(t1) − E(X(t1)) X(t2) − E(X(t2))
The autocovariance function can be expressed using the mean and
autocorrelation functions as
CX (t1, t2) = RX (t1, t2) − µX (t1)µX (t2)
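These definitions translate directly into Monte Carlo estimates: average over many independent realizations of the process. A minimal sketch, using an IID N(0, 1) process as the test case (sample sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
M, n = 100_000, 8                  # number of realizations, time points
X = rng.standard_normal((M, n))    # each row is one realization

mu_hat = X.mean(axis=0)            # estimates mu_X(t); close to 0 here
R_hat = (X.T @ X) / M              # R_hat[t1, t2] estimates R_X(t1, t2)
C_hat = R_hat - np.outer(mu_hat, mu_hat)  # C_X = R_X - mu_X mu_X

# For this IID process R_hat is close to the identity matrix:
# E(X1^2) = 1 on the diagonal, (E(X1))^2 = 0 off the diagonal
```

The last comment anticipates the IID example on the next slide.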



Examples

• IID process:

µX(n) = E(X1)

RX(n1, n2) = E(Xn1 Xn2) = { E(X1^2)     if n1 = n2
                          { (E(X1))^2   if n1 ≠ n2
• Random phase signal process: X(t) = α cos(ωt + Θ) with Θ ∼ Unif[0, 2π]

µX(t) = E(α cos(ωt + Θ)) = ∫_0^{2π} (α/2π) cos(ωt + θ) dθ = 0

RX(t1, t2) = E[X(t1)X(t2)]
           = ∫_0^{2π} (α^2/2π) cos(ωt1 + θ) cos(ωt2 + θ) dθ
           = ∫_0^{2π} (α^2/4π) [cos(ω(t1 + t2) + 2θ) + cos(ω(t1 − t2))] dθ
           = (α^2/2) cos(ω(t1 − t2))
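A quick Monte Carlo check of this result (amplitude, frequency, and sample times are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, omega = 2.0, 2 * np.pi      # hypothetical amplitude and frequency
t1, t2 = 0.3, 0.7                  # arbitrary sample times
theta = rng.uniform(0, 2 * np.pi, size=1_000_000)  # Theta ~ Unif[0, 2pi]

emp = np.mean((alpha * np.cos(omega * t1 + theta)) *
              (alpha * np.cos(omega * t2 + theta)))
thy = alpha**2 / 2 * np.cos(omega * (t1 - t2))
# emp and thy agree to Monte Carlo accuracy
```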


• Random walk:

µX(n) = E(∑_{i=1}^{n} Zi) = ∑_{i=1}^{n} E(Zi) = 0

RX(n1, n2) = E(Xn1 Xn2)
           = E[Xn1 (Xn2 − Xn1 + Xn1)]
           = E(Xn1^2) = n1, assuming n2 ≥ n1, since Xn1 and the increment Xn2 − Xn1 are independent and zero mean
           = min{n1, n2} in general
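The min{n1, n2} formula is easy to confirm by simulation, assuming zero-mean, unit-variance steps such as ±1 with probability 1/2 each:

```python
import numpy as np

rng = np.random.default_rng(3)
M, n = 100_000, 20
Z = rng.choice([-1, 1], size=(M, n))  # zero-mean, unit-variance steps
X = np.cumsum(Z, axis=1)              # column k-1 holds X_k per realization

n1, n2 = 5, 12                        # arbitrary 1-based time indices
R_hat = np.mean(X[:, n1 - 1] * X[:, n2 - 1])
# R_hat is close to min(n1, n2) = 5
```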

• Poisson process:

µN(t) = λt

RN(t1, t2) = E(N(t1)N(t2))
           = E[N(t1)(N(t2) − N(t1) + N(t1))]
           = λt1 · λ(t2 − t1) + λt1 + λ^2 t1^2 = λt1 + λ^2 t1t2, assuming t2 ≥ t1, by independent increments and E(N(t1)^2) = λt1 + (λt1)^2
           = λ min{t1, t2} + λ^2 t1t2
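The same kind of check works here, simulating N(t1) and N(t2) through the independent-increment property (rate and times are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
lam, t1, t2 = 1.5, 1.0, 2.5     # hypothetical rate and times, t2 >= t1
M = 1_000_000

N1 = rng.poisson(lam * t1, size=M)               # N(t1) ~ Poisson(lam*t1)
N2 = N1 + rng.poisson(lam * (t2 - t1), size=M)   # add independent increment

R_hat = np.mean(N1 * N2)
R_thy = lam * min(t1, t2) + lam**2 * t1 * t2     # = 1.5 + 5.625 = 7.125
```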



Gaussian Random Processes

• A Gaussian random process (GRP) is a random process X(t) such that

[X(t1), X(t2), . . . , X(tn)]^T

is a GRV for all t1, t2, . . . , tn ∈ T
• Since the joint pdf for a GRV is specified by its mean and covariance matrix, a
GRP is specified by its mean µX (t) and autocorrelation RX (t1, t2) functions
• Example: The discrete time WGN process is a GRP


Gauss-Markov Process

• Let Zn , n ≥ 1, be a WGN process, i.e., an IID process with Z1 ∼ N (0, N )


The Gauss-Markov process is a first-order autoregressive process defined by
X1 = Z1
Xn = αXn−1 + Zn , n > 1,
where |α| < 1
• This process is a GRP, since X1 = Z1 and Xk = αXk−1 + Zk, where Z1, Z2, . . . are i.i.d. N(0, N), so

⎡ X1  ⎤   ⎡ 1        0        ···  0  0 ⎤ ⎡ Z1  ⎤
⎢ X2  ⎥   ⎢ α        1        ···  0  0 ⎥ ⎢ Z2  ⎥
⎢ X3  ⎥ = ⎢ α^2      α        ···  0  0 ⎥ ⎢ Z3  ⎥
⎢ ⋮   ⎥   ⎢ ⋮        ⋮        ⋱   ⋮  ⋮ ⎥ ⎢ ⋮   ⎥
⎢ Xn−1⎥   ⎢ α^(n−2)  α^(n−3)  ···  1  0 ⎥ ⎢ Zn−1⎥
⎣ Xn  ⎦   ⎣ α^(n−1)  α^(n−2)  ···  α  1 ⎦ ⎣ Zn  ⎦

is a linear transformation of a GRV and is therefore a GRV
• Clearly, the Gauss-Markov process is Markov. It is not, however, an independent
increment process
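A short sketch of this equivalence: generate the process by the recursion and confirm it matches the lower-triangular linear transformation (parameters are arbitrary, with |alpha| < 1):

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, N_var, n = 0.8, 1.0, 6       # hypothetical alpha and noise power N
Z = rng.normal(0, np.sqrt(N_var), size=n)

# Recursive definition: X1 = Z1, Xk = alpha * X_{k-1} + Zk
X = np.empty(n)
X[0] = Z[0]
for k in range(1, n):
    X[k] = alpha * X[k - 1] + Z[k]

# Equivalent matrix form: A[i, j] = alpha^(i-j) for j <= i, else 0
i, j = np.indices((n, n))
A = np.tril(alpha ** (i - j))
assert np.allclose(X, A @ Z)        # same GRV either way
```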



• Mean and covariance functions:

µX(n) = E(Xn) = E(αXn−1 + Zn)
      = α E(Xn−1) + E(Zn) = α E(Xn−1) = α^(n−1) E(Z1) = 0

To find the autocorrelation function, for n2 > n1 we write

Xn2 = α^(n2−n1) Xn1 + ∑_{i=0}^{n2−n1−1} α^i Zn2−i

Thus

RX(n1, n2) = E(Xn1 Xn2) = α^(n2−n1) E(Xn1^2) + 0,

since Xn1 and Zn2−i are independent and zero mean for 0 ≤ i ≤ n2 − n1 − 1

Next, to find E(Xn1^2), consider

E(X1^2) = N
E(Xn1^2) = E[(αXn1−1 + Zn1)^2] = α^2 E(Xn1−1^2) + N

Thus

E(Xn1^2) = N (1 − α^(2n1)) / (1 − α^2)


Finally, the autocorrelation function is

RX(n1, n2) = α^|n2−n1| N (1 − α^(2 min{n1,n2})) / (1 − α^2)
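A Monte Carlo check of this formula under hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, N_var = 0.8, 1.0
M, n = 200_000, 30

Z = rng.normal(0, np.sqrt(N_var), size=(M, n))
X = np.empty_like(Z)
X[:, 0] = Z[:, 0]
for k in range(1, n):                 # recursion, vectorized over M paths
    X[:, k] = alpha * X[:, k - 1] + Z[:, k]

n1, n2 = 10, 25                       # arbitrary 1-based indices
R_hat = np.mean(X[:, n1 - 1] * X[:, n2 - 1])
R_thy = (alpha ** abs(n2 - n1) * N_var
         * (1 - alpha ** (2 * min(n1, n2))) / (1 - alpha**2))
# R_hat and R_thy agree to Monte Carlo accuracy
```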
• Estimation of Gauss-Markov process: Suppose we observe a noisy version of the Gauss-Markov process,

Yn = Xn + Wn,

where Wn is a WGN process independent of Zn with average power Q

We can use the Kalman filter from Lecture Notes 4 to estimate Xi+1 from the observations Y^i = (Y1, . . . , Yi) as follows:

Initialization:
X̂1|0 = 0
σ^2_1|0 = N



Update: For i = 1, 2, . . . ,

X̂i+1|i = αX̂i|i−1 + ki(Yi − X̂i|i−1)
σ^2_i+1|i = α(α − ki)σ^2_i|i−1 + N, where

ki = α σ^2_i|i−1 / (σ^2_i|i−1 + Q)

Substituting from the ki equation into the MSE update equation, we obtain

σ^2_i+1|i = α^2 Q σ^2_i|i−1 / (σ^2_i|i−1 + Q) + N
This is a Riccati recursion (a quadratic recursion in the MSE) and has a steady-state solution:

σ^2 = α^2 Q σ^2 / (σ^2 + Q) + N

Solving this quadratic equation, we obtain

σ^2 = [N − (1 − α^2)Q + √(4NQ + (N − (1 − α^2)Q)^2)] / 2
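Iterating the Riccati recursion numerically and comparing against this closed form makes a quick sanity check (parameter values are arbitrary):

```python
import numpy as np

alpha, N_var, Q = 0.8, 1.0, 0.5     # hypothetical alpha, N, Q

# Iterate the Riccati recursion starting from sigma^2_{1|0} = N
s = N_var
for _ in range(100):
    s = alpha**2 * Q * s / (s + Q) + N_var

# Closed-form steady-state solution from the quadratic above
b = N_var - (1 - alpha**2) * Q
s_ss = (b + np.sqrt(4 * N_var * Q + b**2)) / 2
# s and s_ss agree (both roughly 1.23 for these parameters)
```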


The Kalman gain ki converges to

k = [−N − (1 − α^2)Q + √(4NQ + (N − (1 − α^2)Q)^2)] / (2αQ)
and the steady-state Kalman filter is
X̂i+1|i = αX̂i|i−1 + k(Yi − X̂i|i−1)
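Putting the pieces together, a minimal sketch of this fixed-gain predictor applied to simulated data, with the same hypothetical parameters as above:

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, N_var, Q, n = 0.8, 1.0, 0.5, 500

# Simulate the Gauss-Markov process and noisy observations Y = X + W
Z = rng.normal(0, np.sqrt(N_var), size=n)
W = rng.normal(0, np.sqrt(Q), size=n)
X = np.empty(n)
X[0] = Z[0]
for k in range(1, n):
    X[k] = alpha * X[k - 1] + Z[k]
Y = X + W

# Steady-state Kalman gain from the formula above
b = N_var - (1 - alpha**2) * Q
k_ss = (-N_var - (1 - alpha**2) * Q
        + np.sqrt(4 * N_var * Q + b**2)) / (2 * alpha * Q)

# Xhat[i] predicts X[i] from Y[0], ..., Y[i-1]; Xhat[0] = Xhat_{1|0} = 0
Xhat = np.zeros(n)
for i in range(n - 1):
    Xhat[i + 1] = alpha * Xhat[i] + k_ss * (Y[i] - Xhat[i])
```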

