Lecture 21
Definition 1. Estimator: Any function of the random sample used to estimate the unknown value of a given parametric function g(θ) is called an estimator. That is, if X = (X1, . . . , Xn) is a random sample from a population with common distribution function Fθ, a function t(X) used for estimating g(θ) is known as an estimator. If x = (x1, . . . , xn) is a realization of X, then t(x) is called an estimate.
For example, in estimating the average height of male students in a class, we may use the sample mean X̄ as an estimator. If a random sample of size 20 has sample mean 170 cm, then 170 cm is an estimate of the average height of the male students in that class.
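To make the estimator/estimate distinction concrete, here is a minimal sketch in Python; the height values are hypothetical illustrations, not data from these notes:

import numpy as np

# The estimator is the rule t(X) = X-bar; an estimate is the value this rule
# takes on one particular realization of the sample.
def t(sample):
    return np.mean(sample)

# Hypothetical heights in cm (illustrative numbers only).
heights = np.array([168.0, 172.0, 171.0, 169.0, 170.0])
print(t(heights))  # the estimate for this realization: 170.0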
Parameter Space: The set of all possible values of the parameter (or parameters) is called the parameter space. It is denoted by Θ.
Definition 4. The quantity Eθ (t(X) − θ)² is called the mean square error (MSE) of t(X) about θ.
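A useful identity to record here: writing b(θ) = Eθ t(X) − θ for the bias, Eθ (t(X) − θ)² = Varθ (t(X)) + b(θ)², so for an unbiased estimator the MSE is simply the variance. A minimal Monte Carlo sketch of the definition, assuming a N(θ, σ²) model with σ = 2 and n = 25 chosen purely for illustration:

import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n, reps = 5.0, 2.0, 25, 100_000
samples = rng.normal(theta, sigma, size=(reps, n))  # reps samples of size n
t = samples.mean(axis=1)             # one value of t(X) = X-bar per sample
mse = np.mean((t - theta) ** 2)      # approximates E_theta (t(X) - theta)^2
print(mse, sigma**2 / n)             # X-bar is unbiased, so MSE = Var = sigma^2/n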
Example 5. Let X be a binomial random variable with parameters n and p, where n is known and 0 ≤ p ≤ 1. Find unbiased estimators for a) p, the population proportion, b) p², and c) the variance of X.
Solution: a) We are given that X follows a binomial(n, p) distribution, with n known and p, the population proportion, unknown. Let t(X) = X/n, the sample proportion. Now,

E(t(X)) = E(X/n) = np/n = p,

so the sample proportion is unbiased for p.
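A quick simulation check of this computation (the particular n, p, and replication count are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 20, 0.3, 200_000
x = rng.binomial(n, p, size=reps)   # reps independent copies of X
print((x / n).mean(), p)            # the average sample proportion is close to p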
Remark 7. 1. An unbiased estimator need not be unique. For example, let X1, · · · , Xn be a random sample from a Poisson distribution with parameter λ, λ > 0. Then t1(X) = X̄, t2(X) = Xi (any single observation), and t3(X) = (X1 + 2X2)/3 are all unbiased estimators of λ; a simulation comparing them appears after this remark.
2. If E(X) exists, then the sample mean is an unbiased estimator of the population mean.
3. Suppose E(X²) exists, i.e., Var(X) = σ² exists. Then S² = (1/(n − 1)) Σ_{i=1}^n (Xi − X̄)² is unbiased for σ². (Prove!)
4. Unbiased estimators may not always exist. For example, let X follow a binomial distribution with parameters n and p. Then there exists no unbiased estimator for p^{n+1}. (Prove!)
5. Unbiased estimators may not always be reasonable; they can be absurd. For example, t(X) = (−2)^X is an absurd unbiased estimator of e^{−3λ}, where X follows a Poisson distribution with parameter λ: the target lies in (0, 1], yet the estimator oscillates in sign. (Why is it unbiased?)
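As anticipated in item 1, a minimal simulation sketch comparing the three unbiased estimators of λ (the values of λ, n, and the replication count are illustrative choices, not values from the notes):

import numpy as np

rng = np.random.default_rng(2)
lam, n, reps = 4.0, 30, 100_000
x = rng.poisson(lam, size=(reps, n))
t1 = x.mean(axis=1)                  # t1(X) = X-bar
t2 = x[:, 0].astype(float)           # t2(X) = X_1, a single observation
t3 = (x[:, 0] + 2.0 * x[:, 1]) / 3   # t3(X) = (X_1 + 2 X_2)/3
for name, t in (("t1", t1), ("t2", t2), ("t3", t3)):
    print(name, t.mean(), t.var())   # all means are near lambda; variances differ

All three estimators are unbiased, but their variances differ sharply, which foreshadows why unbiasedness alone is not enough to pick a good estimator.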
It is intuitively clear that for tn(X) (= t(X)) to be a good estimator, the difference tn − θ should be as small as possible. However, tn is a random variable with its own sampling distribution, whose range may be infinitely large. It therefore suffices if the sampling distribution of tn becomes more and more concentrated around θ as the sample size n increases. This means that for each fixed θ ∈ Θ, the probability

Pθ [|tn − θ| ≤ ε]

for any given ε > 0 should be an increasing function of n. This idea leads to the concept of consistency as a criterion for a good estimator.
If such statistics are used, the accuracy of the estimate increases as n grows. It is to be noted that consistency is a large-sample property: it concerns the behavior of an estimator as the sample size becomes infinitely large.
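A minimal simulation sketch of this concentration, assuming a N(µ, 1) population and ε = 0.1 (both purely illustrative choices):

import numpy as np

rng = np.random.default_rng(3)
mu, eps, reps = 0.0, 0.1, 10_000
for n in (10, 100, 1000):
    xbar = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)
    print(n, np.mean(np.abs(xbar - mu) <= eps))  # P(|X-bar - mu| <= eps) grows toward 1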
Example 9. Let X1, · · · , Xn be a random sample from a population with mean µ and variance σ². Then, by Chebyshev's inequality, for any ε > 0,

P(|X̄ − µ| > ε) ≤ Var(X̄)/ε² = σ²/(nε²) → 0 as n → ∞.

Hence, X̄ is consistent for µ.
Example 10. Let X1, · · · , Xn be a sequence of independently and identically distributed (iid) random variables with mean µ. Then, by the weak law of large numbers (WLLN), X̄ is consistent for µ.
Example 11. Let {Xn} be a sequence of iid random variables with pdf

f(x, θ) = e^{−(x−θ)} if x > θ, and 0 otherwise.
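The task for this example is not stated in the excerpt. A standard exercise for this shifted exponential model, and an assumption on our part, is to verify that the sample minimum X(1) = min(X1, . . . , Xn) is consistent for θ (note that X(1) − θ is exponential with rate n). A quick simulation sketch:

import numpy as np

rng = np.random.default_rng(4)
theta = 2.0  # illustrative value of theta
for n in (10, 100, 1000, 10_000):
    x = theta + rng.exponential(1.0, size=n)   # one sample from f(x, theta)
    print(n, x.min())                          # X_(1) settles down onto theta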
Remark 12. 1. If the population mean exists, the sample mean is consistent for the population mean.
2. A consistent estimator need not be unique. For example, if tn is consistent for θ, then (n/(n + 1)) tn and ((n + 2)/(n + 4)) tn are also consistent for θ.
Theorem 13. Let {tn} be a sequence of estimators such that, for every θ ∈ Θ, the expectation and variance of tn exist, with E(tn) = θn → θ and V(tn) → 0 as n → ∞. Then tn is consistent for θ.
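A sketch of the standard argument, via Markov's inequality applied to (tn − θ)²: for any ε > 0,

Pθ (|tn − θ| > ε) ≤ Eθ (tn − θ)²/ε² = [V(tn) + (θn − θ)²]/ε²,

and both terms in the numerator tend to 0 by hypothesis.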
Theorem 14. If t is consistent for θ and h is a continuous function, then h(t) is consistent for h(θ).
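For instance, since X̄ is consistent for µ (Example 9), X̄² is consistent for µ² and e^{−X̄} is consistent for e^{−µ}; in the Poisson(λ) model this yields e^{−X̄} as a consistent (though biased) estimator of P(X = 0) = e^{−λ}.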