
Lecture 3

Minimum Variance Unbiased Estimators

The Cramer–Rao bound gives a lower bound for the variance of unbiased
estimators. The bound is most useful when we can find an unbiased estimator
with variance equal to it; in that case we know this estimator is the minimum
variance unbiased estimator. If not, there are two possibilities: either we
missed the minimum variance unbiased estimator, or the minimum variance
unbiased estimator has variance strictly larger than the bound. In many cases
a minimum variance unbiased estimator does not even exist. To demonstrate
some of these possibilities, consider the following examples. We have already
seen an example where the bound does not apply.

Example 3.1. Suppose X has a binomial distribution with parameters 1
and √θ. Any estimator for θ can be written as

W = W(X) = W(0) + (W(1) − W(0)) · X = α + β · X.

Its expectation, for any α and β, is equal to

α + β · √θ.

There are no constants α and β that make this equal to θ for every θ, and so
there is no unbiased estimator for θ, let alone one that achieves the
Cramer–Rao bound.
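As a minimal check of this claim, one can force zero bias at two values of θ and then verify that the resulting estimator is biased at a third value. The short sketch below assumes sympy, and the particular θ values (1/4, 9/16, 1/2) are illustrative only.

    # Minimal symbolic check: force zero bias at two values of theta,
    # then confirm the bias is nonzero at a third value.
    import sympy as sp

    alpha, beta = sp.symbols('alpha beta')
    theta = sp.Symbol('theta', positive=True)

    # Bias of W = alpha + beta*X when X ~ binomial(1, sqrt(theta)).
    bias = alpha + beta * sp.sqrt(theta) - theta

    # Solve for (alpha, beta) that remove the bias at theta = 1/4 and theta = 9/16.
    sol = sp.solve([bias.subs(theta, sp.Rational(1, 4)),
                    bias.subs(theta, sp.Rational(9, 16))], [alpha, beta])

    # The same (alpha, beta) leave a nonzero bias at theta = 1/2,
    # so no estimator alpha + beta*X is unbiased for every theta.
    print(sp.simplify(bias.subs(sol).subs(theta, sp.Rational(1, 2))))  # nonzero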

Example 3.2. X1 and X2 are independent binomial random variables with
parameters 1 and √θ:

fX(x|θ) = (√θ)^x · (1 − √θ)^(1−x).

What is the Cramer–Rao bound? The log of the density is

ln fX(x|θ) = (x/2) · ln θ + (1 − x) · ln(1 − √θ).


The derivative, or the score function, is

∂ ln fX(x|θ)/∂θ = x/(2θ) − (1 − x)/(1 − √θ) · 1/(2√θ) = (x − √θ)/(2θ(1 − √θ)).
The score function has expectation zero, and the information is

J = E[((X − √θ)/(2θ(1 − √θ)))²] = 1/(4θ√θ(1 − √θ)).

The CR bound with n = 2 is therefore

CR = 1/(nJ) = 2θ√θ(1 − √θ).
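The algebra above can be checked with a short symbolic computation; the sketch below assumes sympy and simply recomputes the score, the information J, and the bound.

    # Symbolic check of the score, information and CR bound for
    # f(x|theta) = sqrt(theta)^x * (1 - sqrt(theta))^(1 - x).
    import sympy as sp

    x = sp.Symbol('x')
    theta = sp.Symbol('theta', positive=True)

    logf = (x / 2) * sp.log(theta) + (1 - x) * sp.log(1 - sp.sqrt(theta))
    score = sp.diff(logf, theta)

    def expect(expr):
        # Expectation over x in {0, 1} with P(X = 1) = sqrt(theta).
        p = sp.sqrt(theta)
        return sp.simplify(p * expr.subs(x, 1) + (1 - p) * expr.subs(x, 0))

    print(expect(score))  # 0: the score has mean zero
    J = expect(score**2)
    print(sp.simplify(J * 4 * theta * sp.sqrt(theta) * (1 - sp.sqrt(theta))))  # 1, i.e. J matches the formula above
    print(sp.simplify(1 / (2 * J)))  # CR bound with n = 2, algebraically equal to 2*theta*sqrt(theta)*(1 - sqrt(theta))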
Now consider estimators for θ. Any estimator can be written as
W = a0 + a1 · X1 + a2 · X2 + a3 · X1 · X2 ,
with expectation

E[W] = a0 + a1 · √θ + a2 · √θ + a3 · θ.
Unbiased estimators must have a0 = 0, a3 = 1 and a2 = −a1. So all unbiased
estimators have the form

W = a1 · (X1 − X2) + X1 · X2,

and

V(W) = a1² · V(X1 − X2) + 2a1 · C(X1 − X2, X1X2) + V(X1X2)
     = a1² · V(X1 − X2) + V(X1X2),

where the covariance term drops out because E[(X1 − X2) · X1X2] = E[X1X2] − E[X1X2] = 0 (using Xi² = Xi) and E[X1 − X2] = 0.
The unbiased estimator with the lowest variance therefore has a1 = 0, i.e., W = X1X2 is
the minimum variance unbiased estimator for θ. But W = X1X2 has mean θ
and variance θ(1 − θ), which is strictly higher than the Cramer–Rao bound
2θ√θ(1 − √θ). Nevertheless, it is the minimum variance unbiased estimator.
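To make the gap concrete, a small Monte Carlo sketch can compare the simulated variance of W = X1X2 with the bound; numpy is assumed here purely for illustration, with θ = 0.3 and the seed chosen arbitrarily.

    # Monte Carlo check: W = X1*X2 is unbiased for theta, but its variance
    # theta*(1 - theta) exceeds the CR bound 2*theta*sqrt(theta)*(1 - sqrt(theta)).
    import numpy as np

    rng = np.random.default_rng(0)
    theta = 0.3
    n_sim = 1_000_000

    x1 = rng.binomial(1, np.sqrt(theta), size=n_sim)
    x2 = rng.binomial(1, np.sqrt(theta), size=n_sim)
    w = x1 * x2

    print(w.mean())   # close to theta = 0.3
    print(w.var())    # close to theta*(1 - theta) = 0.21
    print(2 * theta * np.sqrt(theta) * (1 - np.sqrt(theta)))  # CR bound, about 0.149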
Now let us investigate when we have an unbiased estimator with variance
equal to the Cramer–Rao bound. In that case we must have, in the notation
of the proof of the CR bound, that the correlation of the score S and the
estimator W is equal to one in absolute value. Hence it must be the case
that the score is a linear function of W , with coefficients possibly depending
on θ:
∂ ln f(X; θ)/∂θ = a(θ) · W(X) + b(θ).
Because W is unbiased, or E[W ] = θ, it must be that b(θ) = −a(θ) · θ, so
we must be able to write the score function as
∂ ln f(X; θ)/∂θ = a(θ) · (W(X) − θ).
It turns out that this is both sufficient and necessary for the existence of an
unbiased estimator with variance equal to the Cramer–Rao bound.


Result 3.3. An unbiased estimator with variance equal to the Cramer–Rao
bound exists if and only if the score function can be written as

∂ ln f(X; θ)/∂θ = a(θ) · (W(X) − θ),

for some function W(X). The minimum variance unbiased estimator is then
equal to the maximum likelihood estimator W(X) = θ̂_ML.
Proof. We have already proven that the existence of an MVUE with variance
equal to the CR bound implies the above characterization of the score
function. Now let us consider the “if” part of the result.
Suppose we can write the score as

∂ ln f(X; θ)/∂θ = a(θ) · (W(X) − θ).
Because the score function has expectation zero, W(X) is an unbiased
estimator. Its variance is equal to the variance of the score function divided
by a(θ)²:

V(W) = (1/a(θ)²) · E[(∂ ln f(X; θ)/∂θ)²].
At the same time, by the information matrix equality, the expectation of the
second derivative of the log density equals minus the expectation of the
square of the first derivative. The expected second derivative is

E[∂² ln f(X; θ)/∂θ²] = E[a′(θ) · (W(X) − θ) − a(θ)] = −a(θ).
Hence

E[(∂ ln f(X; θ)/∂θ)²] = a(θ),

implying that

V(W(X)) = 1/a(θ) = 1/E[(∂ ln f(X; θ)/∂θ)²] = CR.

Finally, setting the derivative of the log density equal to zero gives θ = W(X),
and the second derivative there is −a(θ) < 0, so this choice maximizes the log
likelihood. Under these conditions the minimum variance unbiased estimator
W(X) is therefore equal to the maximum likelihood estimator.
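For instance, if X = (X1, ..., Xn) consists of n independent binomial random variables with parameters 1 and θ, the score of the sample is

∂ ln f(X; θ)/∂θ = Σ_{i=1}^n (Xi − θ)/(θ(1 − θ)) = n/(θ(1 − θ)) · (X̄ − θ),

which has exactly the form in Result 3.3 with a(θ) = n/(θ(1 − θ)) and W(X) = X̄. The sample mean is then both the maximum likelihood estimator and the minimum variance unbiased estimator, with variance 1/a(θ) = θ(1 − θ)/n equal to the Cramer–Rao bound.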

Problems


1. [3] Suppose θ̂ is UMVU (uniformly minimum variance unbiased) for
estimating θ. Let a and b be constants. Show that λ̂ = a + bθ̂ is
UMVU for estimating λ = a + bθ.

2. [3] Consider the exponential family model

f(x; θ) = C(θ) · exp(θ · T(x)) · h(x).

Is there a UMVU (uniformly minimum variance unbiased) estimator of
θ? Prove it or give a counter-example.

3. [3] Let W1, ..., Wn be unbiased estimators of a parameter θ with V(Wi) =
σi² and C(Wi, Wj) = 0 if i ≠ j.

(a) Show that, of all unbiased estimators of the form Σ_{i=1}^n ai · Wi,
where the ai's are constants, the estimator

W* = (Σ_{i=1}^n Wi/σi²) / (Σ_{i=1}^n 1/σi²)

has minimum variance.

(b) Show that

V(W*) = 1 / (Σ_{i=1}^n 1/σi²).

4. [3] Let W (X) be an unbiased estimator of θ.

(a) Show that if W(X) is MVUE (minimum variance unbiased estimator),
then it is unique.
(b) Show that W (X) is MVUE if and only if it is uncorrelated with
all unbiased estimators U of zero (i.e., Eθ U = 0 for any θ).

5. [0, 3, 10] Suppose that Xt = θ + Ut, where Ut = Vt + ρV_{t−1} and the Vt are
iid random variables with finite first four moments (E|Vt|^j < ∞ for
j = 1, ..., 4) and EVt = 0.

(a) Apply a CLT to find the limiting distribution of √n(X̄n − θ).

(b) Suppose that ρ is known. Find the minimum variance linear unbiased
estimator (MVLUE) θ̂n of θ.

(c) Apply a CLT to find the limiting distribution of √n(θ̂n − θ) and
compare with item (a). Explain your answer.
