
6

THE MULTIVARIATE NORMAL DISTRIBUTION
The Multivariate Normal Distribution (MND) is a family of distributions generated by linear
transformations, or linear vector functions, of independent normal random variables, of the
form
$$\mathbf{y} = A\mathbf{x} + \mathbf{b}, \tag{6.1}$$
where $A$ is a $k \times m$ matrix of transformation coefficients, $\mathbf{x} = (x_1, x_2, \ldots, x_m)'$, $\mathbf{y} = (y_1, y_2, \ldots, y_k)'$, and $\mathbf{b} = (b_1, b_2, \ldots, b_k)'$.

6.1 The Multivariate Normal Random Vector


Theorem 6.1
Any linear transform of a vector of independent Normal random variables can be expressed as a
linear transform of independent standard Normal variables.
To see this, let $\mathbf{x} = (X_1, X_2, \ldots, X_m)'$ be a vector of independent Normal variables with $E(X_i) = \mu_i$ and $\mathrm{var}(X_i) = \sigma_i^2$, $i = 1, 2, \ldots, m$. Standardizing gives
$$Z_i = \frac{X_i - \mu_i}{\sigma_i}.$$
Then $\mathbf{z} = (Z_1, Z_2, \ldots, Z_m)'$ is a vector of independent and identically distributed (iid) standard Normal variables, and $X_i = \sigma_i Z_i + \mu_i$. Therefore, we can write
$$\mathbf{x} = C\mathbf{z} + \boldsymbol{\mu} \tag{6.2}$$
where $C = \mathrm{diag}(\sigma_1, \sigma_2, \ldots, \sigma_m)$ and $\boldsymbol{\mu} = (\mu_1, \mu_2, \ldots, \mu_m)'$.

Thus, any linear transform in Equation (6.1) may be expressed as
$$\mathbf{y} = B\mathbf{z} + \mathbf{d} \tag{6.3}$$
where $B = AC$ and $\mathbf{d} = A\boldsymbol{\mu} + \mathbf{b}$. Equation (6.3) is a linear transform of standard Normal random variables.
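As a quick numerical check of this reduction, the following sketch (with assumed illustrative values for $A$, $\mathbf{b}$, and the means and standard deviations of the $X_i$) verifies that Equations (6.1) and (6.3) produce the same vector for a given draw of $\mathbf{z}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative transformation: k = 2, m = 3
A = np.array([[1.0, 2.0, 0.5],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, -2.0])
mu = np.array([0.5, 1.0, -1.0])     # means of X_1, ..., X_m
sigma = np.array([2.0, 0.5, 1.5])   # standard deviations of X_1, ..., X_m

C = np.diag(sigma)
B = A @ C                           # B = AC, as in Equation (6.3)
d = A @ mu + b                      # d = A mu + b

z = rng.standard_normal(3)          # one draw of iid standard Normals
x = C @ z + mu                      # Equation (6.2): x = Cz + mu
print(np.allclose(A @ x + b,        # Equation (6.1) ...
                  B @ z + d))       # ... equals Equation (6.3): True
```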

Definition 6.1
A random vector $\mathbf{X} = (X_1, X_2, \ldots, X_k)'$ is said to have a Multivariate Normal Distribution if $\mathbf{X}$ is a linear transform of a vector $\mathbf{Z} = (Z_1, Z_2, \ldots, Z_m)'$ of independent standard Normal variables.

Thus, any multivariate Normal vector $\mathbf{X}$ can be written as
$$\mathbf{X} = A\mathbf{Z} + \boldsymbol{\mu} \tag{6.4}$$
where $A$ is a $k \times m$ matrix of constants and $\mathbf{Z}$ is an $m \times 1$ vector of iid standard Normal random variables. The mean vector and variance-covariance matrix of $\mathbf{X}$ are $E(\mathbf{X}) = \boldsymbol{\mu}$ and $D(\mathbf{X}) = AA'$.
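A Monte Carlo sketch of these moment formulas (the matrix $A$ and vector $\boldsymbol{\mu}$ below are assumed for illustration): the sample mean and sample covariance of simulated draws of $\mathbf{X} = A\mathbf{Z} + \boldsymbol{\mu}$ should approximate $\boldsymbol{\mu}$ and $AA'$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative k = 2, m = 3 transformation
A = np.array([[1.0, 2.0, -1.0],
              [3.0, -2.0, 0.0]])
mu = np.array([2.0, 0.0])

Z = rng.standard_normal((100_000, 3))    # rows are iid N(0, I_3) draws
X = Z @ A.T + mu                         # Equation (6.4), applied row-wise

print(X.mean(axis=0).round(2))           # approximately mu
print(np.cov(X, rowvar=False).round(2))  # approximately A A'
print(A @ A.T)                           # exact D(X)
```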

Exercise

Find $E(\mathbf{X})$ and $D(\mathbf{X})$ in the following linear transformation of the standard normal vector $\mathbf{Z} = (Z_1, Z_2, Z_3)'$:
$$X_1 = Z_1 + 2Z_2 + Z_3 + 2$$
$$X_2 = 3Z_1 + 2Z_2$$

Remarks 6.1

1. In Equation (6.4), $\mathrm{rank}(A)$ cannot exceed $k$.
2. If $\mathrm{rank}(A) < k$, the components of $\mathbf{X}$ are linearly dependent. The variance-covariance matrix of $\mathbf{X}$, $\Sigma_X$, is then singular. In this case, $\mathbf{X}$ has a singular multivariate Normal distribution.
3. If $\mathrm{rank}(A) = k$, the component variables of $\mathbf{X}$ are linearly independent. The matrix $\Sigma_X$ is then non-singular.
4. If $k > m$, then $\mathrm{rank}(A)$, which cannot exceed $\min(k, m)$, is necessarily less than $k$, and so $\mathbf{X}$ has a singular distribution. Therefore the distribution of $\mathbf{X}$ is singular whenever $k > m$; a numeric illustration follows this list.
5. Let $\mathrm{rank}(A) = r$, where $r < k$, which means that $\mathbf{X}$ is singular. Then $\mathbf{X}$ has a sub-vector $\mathbf{X}_{(r)}$ (i.e. of $r$ components) which has a non-singular distribution. It implies that the components of $\mathbf{X}$ which are not in $\mathbf{X}_{(r)}$ are each linearly dependent on the components of $\mathbf{X}_{(r)}$; their distribution, and hence the distribution of the entire vector $\mathbf{X}$, is then completely determined by $\mathbf{X}_{(r)}$. Therefore, any singular multivariate Normal distribution may be specified by an associated non-singular distribution.
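The singular case in Remarks 2 and 4 is easy to see numerically. A minimal sketch, assuming a rank-deficient $3 \times 2$ matrix $A$ (so $k = 3 > m = 2$):

```python
import numpy as np

# Assumed A with k = 3 rows built from only m = 2 standard Normals;
# the third row is the sum of the first two, so X_3 = X_1 + X_2.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

Sigma = A @ A.T                       # D(X) = AA'
print(np.linalg.matrix_rank(Sigma))   # 2 < k = 3
print(np.linalg.det(Sigma))           # 0 (up to rounding): Sigma is singular
```

Here $(X_1, X_2)'$ plays the role of the non-singular sub-vector $\mathbf{X}_{(r)}$ of Remark 5.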

Notation

A multivariate Normal distribution with mean vector $\boldsymbol{\mu}$ and variance-covariance matrix $\Sigma$ will be designated as $N(\boldsymbol{\mu}, \Sigma)_k$, where $k$ is the dimension of the distribution.

6.2 Properties of the Multivariate Normal Distribution


The following are properties of a multivariate Normally distributed random vector.

1. Linear transform
If $\mathbf{Y} = A\mathbf{X} + \mathbf{b}$, where $A$ is a $p \times k$ matrix of constants, $\mathbf{b}$ is a constant vector, and $\mathbf{X} \sim N(\boldsymbol{\mu}, \Sigma)_k$, then
(a) $\mathbf{Y} \sim N(A\boldsymbol{\mu} + \mathbf{b}, A\Sigma A')$
(b) If the distribution of $\mathbf{X}$ is non-singular and $\mathbf{Y} = (Y_1, Y_2, \ldots, Y_p)'$, then the distribution of $\mathbf{Y}$ is also non-singular if and only if $\mathrm{rank}(A) = p$.

2. Independence of Component Variables


The component variables of $\mathbf{X}$ are mutually independent if and only if the variance-covariance matrix $\Sigma_X$ is diagonal.

Proof
Denote $\Sigma = (\sigma_{ij})$.
If $X_i$, $i = 1, 2, \ldots, k$ are mutually independent, then they are pairwise independent, and hence pairwise uncorrelated. This implies that $\sigma_{ij} = 0$ for $i \neq j$. Therefore, $\Sigma$ is diagonal.
Now suppose that $\Sigma$ is diagonal, given as $\Sigma = \mathrm{diag}(\sigma_{11}, \sigma_{22}, \ldots, \sigma_{kk})$. We note that $\mathbf{X} = A\mathbf{Z} + \boldsymbol{\mu}$, where $\mathbf{Z} \sim N(\mathbf{0}, I)_m$ and $\Sigma = AA'$. We use the property of multivariate moment generating functions (mmgf) to establish the independence.
Let $\mathbf{X} = (X_1, X_2, \ldots, X_k)'$ and $\mathbf{Z} = (Z_1, Z_2, \ldots, Z_m)'$. The mmgf of $\mathbf{Z}$ is given by
$$M_{\mathbf{Z}}(\mathbf{t}) = \exp\left(\tfrac{1}{2}\mathbf{t}'\mathbf{t}\right)$$
where $\mathbf{t} = (t_1, t_2, \ldots, t_m)'$; define $\mathbf{s} = (s_1, s_2, \ldots, s_k)'$. The mmgf of $\mathbf{X}$ is given by
$$M_{\mathbf{X}}(\mathbf{s}) = E\left[\exp\left\{(A\mathbf{Z} + \boldsymbol{\mu})'\mathbf{s}\right\}\right] = e^{\boldsymbol{\mu}'\mathbf{s}} M_{A\mathbf{Z}}(\mathbf{s}) = e^{\boldsymbol{\mu}'\mathbf{s}} M_{\mathbf{Z}}(A'\mathbf{s}).$$
Making substitutions and simplifying gives
$$M_{\mathbf{X}}(\mathbf{s}) = e^{\boldsymbol{\mu}'\mathbf{s}} \exp\left(\tfrac{1}{2}\mathbf{s}'AA'\mathbf{s}\right) = \exp\left(\boldsymbol{\mu}'\mathbf{s} + \tfrac{1}{2}\mathbf{s}'\Sigma_X\mathbf{s}\right).$$
Substituting for $\boldsymbol{\mu}$ and $\Sigma_X$ gives
$$M_{\mathbf{X}}(\mathbf{s}) = \exp\left(\sum_{j=1}^{k}\mu_j s_j + \tfrac{1}{2}\sum_{j=1}^{k}\sigma_{jj} s_j^2\right) = \prod_{j=1}^{k}\exp\left(\mu_j s_j + \tfrac{1}{2}\sigma_{jj} s_j^2\right).$$
Now $M_{X_j}(s_j) = \exp\left(\mu_j s_j + \tfrac{1}{2}\sigma_{jj} s_j^2\right)$, implying that $X_j \sim N(\mu_j, \sigma_{jj})$. Therefore,
$$M_{\mathbf{X}}(\mathbf{s}) = \prod_{j=1}^{k} M_{X_j}(s_j).$$
Since the mmgf of $\mathbf{X}$ is the product of the mgfs of the components $X_j$, it follows that $X_j$, $j = 1, 2, \ldots, k$, are independent.
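The property can be illustrated by simulation (with an assumed diagonal $\Sigma$): off-diagonal sample covariances are near zero, and so are covariances of nonlinear functions of distinct components, which is consistent with full independence rather than mere uncorrelatedness.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed diagonal covariance: the components should be independent
mu = np.array([1.0, 0.0, -2.0])
Sigma = np.diag([1.0, 4.0, 0.25])

X = rng.multivariate_normal(mu, Sigma, size=200_000)

print(np.cov(X, rowvar=False).round(2))  # off-diagonals approximately 0
# Independence also kills correlation between nonlinear functions:
print(np.cov(X[:, 0]**2, X[:, 1]**2)[0, 1].round(3))  # approximately 0
```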

6.3 Transformation of Independent Standard Normal Random Variables


Theorem 6.2
If $\mathbf{X}$ is non-singular and distributed as $N(\boldsymbol{\mu}, \Sigma)_k$, then there exists a non-singular matrix $S$ such that
$$SS' = \Sigma \tag{6.5}$$
and
$$\mathbf{Z} = S^{-1}(\mathbf{X} - \boldsymbol{\mu}) \tag{6.6}$$
is a vector of independent standard Normal random variables, also of dimension $k$.

Proof

Since $\mathbf{X} \sim N(\boldsymbol{\mu}, \Sigma)_k$ and non-singular, $\Sigma$ is non-singular, positive definite and symmetric.

By the Spectral Decomposition Theorem for symmetric matrices, we can find a factorization $\Sigma = T\Lambda T'$, where $T$ is an orthogonal matrix and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_k)$ is the diagonal matrix of eigenvalues of $\Sigma$, with $\lambda_j > 0$ for all $j$ since $\Sigma$ is positive definite. Therefore, $\Lambda$ factorizes as
$$\Lambda = \mathrm{diag}(\sqrt{\lambda_1}, \ldots, \sqrt{\lambda_k})\,\mathrm{diag}(\sqrt{\lambda_1}, \ldots, \sqrt{\lambda_k}) = \Lambda^{1/2}\Lambda^{1/2}.$$
Thus,
$$\Sigma = T\Lambda T' = (T\Lambda^{1/2})(\Lambda^{1/2} T') = SS'$$
where $S = T\Lambda^{1/2}$, which is non-singular since $T$ is non-singular (being orthogonal) and $\Lambda^{1/2}$ is also non-singular.

Now, given $\mathbf{Z} = S^{-1}(\mathbf{X} - \boldsymbol{\mu})$, $E(\mathbf{Z}) = S^{-1}(E(\mathbf{X}) - \boldsymbol{\mu}) = \mathbf{0}$.

The variance-covariance matrix is given as
$$D(\mathbf{Z}) = S^{-1} D(\mathbf{X} - \boldsymbol{\mu})(S^{-1})' = S^{-1} D(\mathbf{X})(S^{-1})' = S^{-1}\Sigma(S')^{-1} = S^{-1}(SS')(S')^{-1} = S^{-1}S\,S'(S')^{-1} = I_k$$

Thus, $\mathbf{Z} \sim N(\mathbf{0}, I_k)$, which is a vector of $k$ independent standard Normal random variables.
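The construction in this proof translates directly into a whitening routine. A minimal sketch, assuming an illustrative non-singular $\Sigma$ and $\boldsymbol{\mu}$: build $S = T\Lambda^{1/2}$ from the eigendecomposition and check that $\mathbf{Z} = S^{-1}(\mathbf{X} - \boldsymbol{\mu})$ has mean approximately $\mathbf{0}$ and covariance approximately $I_k$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed illustrative non-singular parameters (k = 3)
mu = np.array([1.0, 0.0, -2.0])
Sigma = np.array([[2.0, 0.0, 1.0],
                  [0.0, 1.0, 0.5],
                  [1.0, 0.5, 5.0]])

lam, T = np.linalg.eigh(Sigma)           # Sigma = T diag(lam) T'
S = T @ np.diag(np.sqrt(lam))            # S = T Lambda^{1/2}
print(np.allclose(S @ S.T, Sigma))       # Equation (6.5): True

X = rng.multivariate_normal(mu, Sigma, size=200_000)
Z = (X - mu) @ np.linalg.inv(S).T        # Equation (6.6), applied row-wise

print(Z.mean(axis=0).round(2))           # approximately 0
print(np.cov(Z, rowvar=False).round(2))  # approximately I_3
```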

6.4 The Multivariate Normal Density Function


We know that every multivariate Normal distribution can be determined by some non-singular distribution. It is therefore sufficient to derive the density function of non-singular distributions. Now, following Theorem 6.2, define a non-singular transformation $\mathbf{z} \to \mathbf{x}$ given by $\mathbf{x} = S\mathbf{z} + \boldsymbol{\mu}$, with Equation (6.6) as its inverse. The Jacobian of the inverse transformation is
$$J_{\mathbf{z}}(\mathbf{x}) = \det(S^{-1}) = \frac{1}{\det(S)} = \frac{1}{|\Sigma|^{1/2}}.$$

The density of the vector $\mathbf{z}$ is given as
$$f(\mathbf{z}) = \left(\frac{1}{\sqrt{2\pi}}\right)^{k} \exp\left(-\tfrac{1}{2}\mathbf{z}'\mathbf{z}\right).$$
Thus, the density of $\mathbf{X}$ is given as
$$\phi(\mathbf{x}) = f(\mathbf{z})\,\big|J_{\mathbf{z}}(\mathbf{x})\big| = f\left(S^{-1}(\mathbf{x} - \boldsymbol{\mu})\right)\big|J_{\mathbf{z}}(\mathbf{x})\big| = \frac{1}{|\Sigma|^{1/2}}\left(\frac{1}{\sqrt{2\pi}}\right)^{k} \exp\left(-\tfrac{1}{2}(\mathbf{x} - \boldsymbol{\mu})'(S^{-1})'S^{-1}(\mathbf{x} - \boldsymbol{\mu})\right)$$

Further simplification gives
$$\phi(\mathbf{x}) = \frac{1}{|\Sigma|^{1/2}}\left(\frac{1}{\sqrt{2\pi}}\right)^{k} \exp\left(-\tfrac{1}{2}(\mathbf{x} - \boldsymbol{\mu})'\Sigma^{-1}(\mathbf{x} - \boldsymbol{\mu})\right), \quad \mathbf{x} \in \mathbb{R}^{k} \tag{6.7}$$

Thus, the density function of $\mathbf{X}$ depends on the mean vector and variance-covariance matrix $\Sigma$ through the quadratic form
$$Q(\mathbf{x}) = (\mathbf{x} - \boldsymbol{\mu})'\Sigma^{-1}(\mathbf{x} - \boldsymbol{\mu}) \tag{6.8}$$
The quadratic form in Equation (6.8) is sufficient to identify the distribution of $\mathbf{X}$.

Equation (6.8) is equivalent to
$$Q(\mathbf{x}) = \sum_{i=1}^{k}\sum_{j=1}^{k} a_{ij}(x_i - \mu_i)(x_j - \mu_j) \tag{6.9}$$
where $A = (a_{ij}) = \Sigma^{-1}$, a symmetric matrix. Further expansion gives
$$Q(\mathbf{x}) = \sum_{i=1}^{k} a_{ii}(x_i - \mu_i)^2 + 2\sum_{i<j} a_{ij}(x_i - \mu_i)(x_j - \mu_j) \tag{6.10}$$

Example 6.1

Let $\mathbf{X} = (X_1, X_2, X_3)'$ have a multivariate Normal distribution with mean vector $\boldsymbol{\mu} = (1, 0, -2)'$ and variance-covariance matrix
$$\Sigma = \begin{pmatrix} 2 & 0 & 2 \\ 0 & 1 & 1 \\ 2 & 1 & 5 \end{pmatrix}$$
Find the density function of X.

Solution

The density function is given by Equation (6.7).

 4 2  2
 
We find that Σ  4 and Σ 1   2 6  2 .
1
4 
 2  2 2 

From Equation (6.10),


$$Q(\mathbf{x}) = (x_1 - 1)^2 + \tfrac{3}{2}(x_2 - 0)^2 + \tfrac{1}{2}(x_3 + 2)^2 + (x_1 - 1)x_2 - (x_1 - 1)(x_3 + 2) - x_2(x_3 + 2)$$

This simplifies as

$$Q(\mathbf{x}) = x_1^2 + \tfrac{3}{2}x_2^2 + \tfrac{1}{2}x_3^2 + x_1 x_2 - x_1 x_3 - x_2 x_3 - 4x_1 - 3x_2 + 3x_3 + 5$$

Therefore the density function of X is given as


$$\phi(\mathbf{x}) = \frac{1}{2}\left(\frac{1}{\sqrt{2\pi}}\right)^{3} \exp\left(-\tfrac{1}{2}Q(\mathbf{x})\right), \quad \mathbf{x} \in \mathbb{R}^3$$
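The algebra in this example can be checked mechanically; a short sympy sketch that reproduces $|\Sigma|$, $\Sigma^{-1}$, and the expanded quadratic form:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])
mu = sp.Matrix([1, 0, -2])
Sigma = sp.Matrix([[2, 0, 2],
                   [0, 1, 1],
                   [2, 1, 5]])

print(Sigma.det())   # 4
print(Sigma.inv())   # (1/4)*[[4, 2, -2], [2, 6, -2], [-2, -2, 2]]

Q = sp.expand(((x - mu).T * Sigma.inv() * (x - mu))[0, 0])
print(Q)  # x1**2 + 3*x2**2/2 + x3**2/2 + x1*x2 - x1*x3 - x2*x3
          #   - 4*x1 - 3*x2 + 3*x3 + 5  (term order may vary)
```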
2  2 

In Example 6.1, we are able to obtain the joint density function of $(X_1, X_2, X_3)'$ (or the quadratic form $Q(\mathbf{x})$) since the variance-covariance matrix and the mean vector were provided. Conversely, it should be possible to derive the mean vector and variance-covariance matrix given the quadratic form.

It has been noted that $A = (a_{ij}) = \Sigma^{-1}$, where $a_{ii}$ is the coefficient of $x_i^2$ in $Q(\mathbf{x})$ and $2a_{ij}$ is the coefficient of $x_i x_j$, $i \neq j$, $i, j = 1, 2, \ldots, k$. From Equation (6.10), the coefficient of $x_i$ is given by
$$c_i = -2a_{ii}\mu_i - 2\sum_{j \neq i} a_{ij}\mu_j = -2\sum_{j=1}^{k} a_{ij}\mu_j$$
which may be written as
$$\mathbf{c} = -2A\boldsymbol{\mu} \tag{6.11}$$
the vector of coefficients of the linear terms in $Q(\mathbf{x})$.

The mean vector may then be obtained as
$$\boldsymbol{\mu} = -\frac{1}{2}\Sigma\mathbf{c} \tag{6.12}$$

For example, given the quadratic form in the solution to Example 6.1, obtain the mean vector
and variance-covariance matrix.
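A minimal numeric sketch of this recovery, using the coefficients read off the quadratic form in Example 6.1: assemble $A = \Sigma^{-1}$ from the quadratic terms and $\mathbf{c}$ from the linear terms, then apply Equation (6.12).

```python
import numpy as np

# Coefficients read off Q(x) in Example 6.1:
# a_ii = coefficient of x_i^2; a_ij = (coefficient of x_i x_j) / 2
A = np.array([[1.0, 0.5, -0.5],
              [0.5, 1.5, -0.5],
              [-0.5, -0.5, 0.5]])
c = np.array([-4.0, -3.0, 3.0])    # coefficients of x_1, x_2, x_3

Sigma = np.linalg.inv(A)           # Sigma = A^{-1}
mu = -0.5 * Sigma @ c              # Equation (6.12)

print(Sigma.round(6))              # [[2, 0, 2], [0, 1, 1], [2, 1, 5]]
print(mu.round(6))                 # [ 1,  0, -2]
```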

Review Exercise 6

1. Find the mean vector and variance-covariance matrix of the multivariate Normal distribution with density $\phi(\mathbf{x}) = K\exp\left(-\tfrac{1}{2}Q(\mathbf{x})\right)$, if
$$Q(\mathbf{x}) = x_1^2 + \tfrac{3}{2}x_2^2 + x_3^2 + x_1 x_2 - x_1 x_3 - 3x_2 x_3 - 4x_1 - 3x_2 + 3x_3 + 5$$

2. Let $\mathbf{X} = (X_1, X_2, X_3)'$ have a multivariate Normal distribution with mean vector $\boldsymbol{\mu} = (3, 10, 13)'$ and variance-covariance matrix
$$\Sigma = \frac{1}{2}\begin{pmatrix} 5 & 0 & 5 \\ 0 & 5 & 5 \\ 5 & 5 & 10 \end{pmatrix}$$
Find the density function of X.

3. The components of the random vector $\mathbf{X} = (X_1, X_2, \ldots, X_6)'$ are defined as follows:
$$X_1 = 5Z_1 + 2Z_2 + Z_3 + 5Z_4 - 2$$
$$X_2 = Z_1 + Z_2 + 3Z_3 + Z_4 - 1$$
$$X_3 = 2Z_1 + 4Z_2 - 2Z_3 + 3Z_4 + 5$$
$$X_4 = 3Z_1 + 5Z_2 + Z_3 + 4Z_4 - 1$$
$$X_5 = Z_1 + 3Z_2 - 5Z_3 + 2Z_4$$
$$X_6 = 3Z_1 - 2Z_2 + 3Z_3 + 2Z_4 - 4$$
where $Z_1, Z_2, Z_3, Z_4$ are independent standard normal random variables.

Let $\mathbf{X}$ be partitioned as $\mathbf{X}_1 = (X_1, X_2, X_3)'$ and $\mathbf{X}_2 = (X_4, X_5, X_6)'$.

(a) Express $\mathbf{X}_2$ as a linear transformation
$$\mathbf{X}_2 = B\mathbf{X}_1 + \mathbf{c}$$
and identify $B$ and $\mathbf{c}$. [Hint: e.g. $X_6 = X_1 - X_3 + 3$]
(b) Hence, show that $\mathbf{X}$ may be written as
$$\mathbf{X} = C\mathbf{X}_1 + \mathbf{b}$$
(c) Provide two comments about the distribution of $\mathbf{X}$.
(d) Find the distribution of $\mathbf{X}_1$.

4. Let $\mathbf{X} \sim N(\mathbf{0}, I)_3$ and let $\mathbf{Y} = (Y_1, Y_2, Y_3)'$ be given by the transformation
$$y_1 = x_1 + 2x_2 + x_3$$
$$y_2 = 3x_1 + 2x_2 + 2x_3$$
$$y_3 = 2x_1 + x_2 + 4x_3$$

Find
(a) the multivariate moment generating function of Y;
(b) the probability density function of Y;
(c) Comment on your results in (a) and (b).
