
Lecture 4

Rao-Blackwell Theorem
Let $X_1, \ldots, X_n$ be a random sample from the density $f(\cdot\,; \theta)$, and let
$S_1 = s_1(X_1, \ldots, X_n), \ldots, S_k = s_k(X_1, \ldots, X_n)$ be a set of jointly
sufficient statistics. Let the statistic $T = t(X_1, \ldots, X_n)$ be an unbiased
estimator of $\tau(\theta)$. Define $T'$ by $T' = E[T \mid S_1, S_2, \ldots, S_k]$.
Then,
(i) $T'$ is a statistic, and it is a function of the sufficient statistics $S_1, \ldots, S_k$. Write $T' = t'(S_1, S_2, \ldots, S_k)$.
(ii) $E[T'] = \tau(\theta)$; that is, $T'$ is an unbiased estimator of $\tau(\theta)$.
(iii) $\operatorname{var}[T'] \le \operatorname{var}[T]$ for every $\theta$, and $\operatorname{var}[T'] < \operatorname{var}[T]$ for some $\theta$ unless $T$ is equal to $T'$ with probability 1.
Proof: (i) $S_1, S_2, \ldots, S_k$ are sufficient statistics; so the conditional distribution of
any statistic, in particular the statistic $T$, given $S_1, S_2, \ldots, S_k$ is independent of $\theta$;
hence $T' = E[T \mid S_1, S_2, \ldots, S_k]$ is independent of $\theta$, and so $T'$ is a statistic, which
is obviously a function of $S_1, S_2, \ldots, S_k$.
(ii) $E[T'] = E\bigl[E[T \mid S_1, S_2, \ldots, S_k]\bigr] = E[T] = \tau(\theta)$ [since $E(Y) = E[E(Y \mid X)]$];
that is, $T'$ is an unbiased estimator of $\tau(\theta)$.
(iii) Since $E[T] = E[T']$ by (ii), we can write
$$\operatorname{var}[T] = E[(T - E[T'])^2] = E[(T - T' + T' - E[T'])^2]$$
$$= E[(T - T')^2] + 2E[(T - T')(T' - E[T'])] + \operatorname{var}[T'].$$
But
$$E[(T - T')(T' - E[T'])] = E\bigl[E[(T - T')(T' - E[T']) \mid S_1, S_2, \ldots, S_k]\bigr]$$
and
$$E[(T - T')(T' - E[T']) \mid S_1 = s_1, \ldots, S_k = s_k]$$
$$= \{t'(s_1, \ldots, s_k) - E[T']\}\,E[(T - T') \mid S_1 = s_1, \ldots, S_k = s_k]$$
$$= \{t'(s_1, \ldots, s_k) - E[T']\}\bigl(E[T \mid S_1 = s_1, \ldots, S_k = s_k] - E[T' \mid S_1 = s_1, \ldots, S_k = s_k]\bigr)$$
$$= \{t'(s_1, \ldots, s_k) - E[T']\}\,[t'(s_1, \ldots, s_k) - t'(s_1, \ldots, s_k)] = 0,$$
and therefore
$$\operatorname{var}[T] = E[(T - T')^2] + \operatorname{var}[T'] \ge \operatorname{var}[T'].$$
Note that $\operatorname{var}[T] > \operatorname{var}[T']$ unless $T$ equals $T'$ with probability 1.

For many applications (particularly where the density involved has only one
unknown parameter) there will exist a single sufficient statistic, say $S = s(X_1, \ldots, X_n)$,
which would then be used in place of the jointly sufficient set of statistics
$S_1, S_2, \ldots, S_k$. What the theorem says is that, given an unbiased estimator,
another unbiased estimator that is a function of the sufficient statistics can be
derived, and it will not have larger variance.
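
As a concrete illustration of this recipe, the following Python sketch (with arbitrarily chosen values of $\theta$, $n$, and the number of replications; it is only an illustration, not part of the derivation) approximates $T' = E[T \mid \sum X_i]$ in the Bernoulli setting of the problem below by averaging the crude estimator $T = X_1$ over simulated samples that share the same value of the sufficient statistic, and then compares the variances of $T$ and $T'$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 0.3, 10, 100_000           # illustrative values, not from the notes

x = rng.binomial(1, theta, size=(reps, n))  # 'reps' Bernoulli(theta) samples of size n
T = x[:, 0].astype(float)                   # crude unbiased estimator  T = X_1
S = x.sum(axis=1)                           # sufficient statistic      S = sum of X_i

# Empirical Rao-Blackwellization: approximate E[T | S = s] by averaging T over
# all simulated samples that share the same value of S (analytically this is s/n).
cond_mean = {int(s): T[S == s].mean() for s in np.unique(S)}
T_prime = np.array([cond_mean[s] for s in S])

print("approx E[T | S = s], s = 2, 3, 4:", [round(cond_mean[s], 3) for s in (2, 3, 4)])
print("exact  s/n,          s = 2, 3, 4:", [s / n for s in (2, 3, 4)])
print("var(T)  ~", round(T.var(), 4), "  theory theta*(1-theta)   =", round(theta * (1 - theta), 4))
print("var(T') ~", round(T_prime.var(), 4), "  theory theta*(1-theta)/n =", round(theta * (1 - theta) / n, 4))
```

The grouped averages track $s/n$, and the variance of $T'$ comes out close to $\theta(1-\theta)/n$, in line with the closed-form results derived in the problem below.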

Problem
Let $X_1, X_2, \ldots, X_n$ be a random sample from the Bernoulli density
$f(x; \theta) = \theta^x (1 - \theta)^{1 - x}$ for $x = 0$ or $1$. Then $X_1$ is an unbiased estimator of
$\tau(\theta) = \theta$, and $\sum X_i$ is a sufficient statistic. Show that $E[X_1 \mid \sum X_i]$ is a statistic
and an unbiased estimator of $\theta$ with no larger variance than $T = X_1$.

Solution:
We first find the conditional distribution of $X_1$ given $\sum X_i = s$. $X_1$ takes on at most
the two values 0 and 1.

$$P\Bigl[X_1 = 0 \,\Big|\, \sum X_i = s\Bigr] = \frac{P[X_1 = 0,\ \sum_{i=1}^{n} X_i = s]}{P[\sum_{i=1}^{n} X_i = s]} = \frac{P[X_1 = 0,\ \sum_{i=2}^{n} X_i = s]}{P[\sum_{i=1}^{n} X_i = s]}$$
$$= \frac{P[X_1 = 0]\,P[\sum_{i=2}^{n} X_i = s]}{P[\sum_{i=1}^{n} X_i = s]} = \frac{(1-\theta)\binom{n-1}{s}\theta^{s}(1-\theta)^{n-1-s}}{\binom{n}{s}\theta^{s}(1-\theta)^{n-s}} = \frac{n-s}{n}$$

$$P\Bigl[X_1 = 1 \,\Big|\, \sum X_i = s\Bigr] = \frac{P[X_1 = 1,\ \sum_{i=1}^{n} X_i = s]}{P[\sum_{i=1}^{n} X_i = s]} = \frac{P[X_1 = 1,\ \sum_{i=2}^{n} X_i = s-1]}{P[\sum_{i=1}^{n} X_i = s]}$$
$$= \frac{P[X_1 = 1]\,P[\sum_{i=2}^{n} X_i = s-1]}{P[\sum_{i=1}^{n} X_i = s]} = \frac{\theta\binom{n-1}{s-1}\theta^{s-1}(1-\theta)^{n-s}}{\binom{n}{s}\theta^{s}(1-\theta)^{n-s}} = \frac{s}{n}$$

Thus the conditional distribution of $X_1$ given $\sum X_i = s$ is independent of $\theta$, and
$\sum X_i$ is a sufficient statistic.

Now
$$E\Bigl[X_1 \,\Big|\, \sum X_i = s\Bigr] = 0 \cdot \frac{n-s}{n} + 1 \cdot \frac{s}{n} = \frac{s}{n};$$
hence
$$T' = \frac{\sum_{i=1}^{n} X_i}{n}$$
is a statistic, and it is an unbiased estimator of $\theta$.
The variance of $X_1$ is $\theta(1-\theta)$, and the variance of $T'$ is $\theta(1-\theta)/n$; so for $n > 1$ the
variance of $T'$ is actually smaller than the variance of $T = X_1$.
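
The conditional probabilities derived above can also be checked by simulation. The sketch below (with illustrative values of $n$ and $s$, and two different values of $\theta$, all chosen arbitrarily for the demonstration) estimates $P[X_1 = 1 \mid \sum X_i = s]$ by relative frequency and shows that the answer stays near $s/n$ whatever the value of $\theta$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, s = 8, 200_000, 5                      # illustrative values, not from the notes

for theta in (0.3, 0.6):                        # two different values of theta
    x = rng.binomial(1, theta, size=(reps, n))
    sub = x[x.sum(axis=1) == s]                 # keep samples with sum X_i = s
    p1_hat = (sub[:, 0] == 1).mean()            # relative frequency of {X_1 = 1}
    print(f"theta = {theta}:  P[X1 = 1 | sum = {s}] ~ {p1_hat:.3f}   (exact s/n = {s/n:.3f})")
```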

Problem: Let $X_1$ and $X_2$ be independent identically distributed (iid) Poisson($\theta$) random variables.
(a) Find a sufficient statistic for $\theta$.
(b) Show that
$$W = \begin{cases} 1 & \text{if } X_1 = 0 \\ 0 & \text{otherwise} \end{cases}$$
is an unbiased estimator of $\tau(\theta) = e^{-\theta}$.
(c) Compute $E[W \mid X_1 + X_2 = y]$.
(d) For the estimator $W$ in part (b), find a uniformly better unbiased estimator of $e^{-\theta}$.

Solution: (a) The joint pmf of $X_1$ and $X_2$ is
$$f(x_1, x_2 \mid \theta) = f(x_1 \mid \theta) f(x_2 \mid \theta) = \frac{\theta^{x_1} e^{-\theta}}{x_1!} \cdot \frac{\theta^{x_2} e^{-\theta}}{x_2!} = \frac{\theta^{x_1 + x_2} e^{-2\theta}}{x_1!\, x_2!} = g(x_1 + x_2 \mid \theta)\, h(x_1, x_2).$$
Thus, by the factorization criterion, $X_1 + X_2$ is sufficient for $\theta$.

(b) $E[W] = P(W = 1) = P(X_1 = 0) = \dfrac{\theta^0 e^{-\theta}}{0!} = e^{-\theta}$, so $W$ is an unbiased estimator of $e^{-\theta}$.
(c) Since $X_1 + X_2 \sim$ Poisson($2\theta$), we have
$$E[W \mid X_1 + X_2 = y] = \sum_{w} w\, P(W = w \mid X_1 + X_2 = y) = P(W = 1 \mid X_1 + X_2 = y)$$
$$= P(X_1 = 0 \mid X_1 + X_2 = y) = \frac{P(X_1 = 0 \text{ and } X_1 + X_2 = y)}{P(X_1 + X_2 = y)} = \frac{P(X_1 = 0 \text{ and } X_2 = y)}{P(X_1 + X_2 = y)}$$
$$= \frac{P(X_1 = 0)\, P(X_2 = y)}{P(X_1 + X_2 = y)} = \frac{e^{-\theta}\bigl(e^{-\theta}\theta^{y}/y!\bigr)}{e^{-2\theta}(2\theta)^{y}/y!} = \frac{\theta^{y}}{(2\theta)^{y}} = \Bigl(\frac{1}{2}\Bigr)^{y}.$$
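
A quick simulation check of this result (with an arbitrary value of $\theta$ and of $y$, chosen only for the demonstration): among simulated pairs with $X_1 + X_2 = y$, the relative frequency of $\{X_1 = 0\}$ should be close to $(1/2)^{y}$, regardless of $\theta$.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, reps, y = 1.5, 500_000, 3               # illustrative values, not from the notes

x1 = rng.poisson(theta, reps)
x2 = rng.poisson(theta, reps)
keep = (x1 + x2) == y                          # condition on the event X_1 + X_2 = y
p_hat = (x1[keep] == 0).mean()                 # relative frequency of {X_1 = 0}
print("P[X1 = 0 | X1 + X2 = y] ~", round(p_hat, 4), "   exact (1/2)^y =", 0.5 ** y)
```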
(d) Since $W$ is an unbiased estimator of $e^{-\theta}$ and $X_1 + X_2$ is sufficient for $\theta$ (and
consequently for $e^{-\theta}$), the Rao-Blackwell Theorem implies that
$$T'(X_1 + X_2) = E[W \mid X_1 + X_2] = \Bigl(\frac{1}{2}\Bigr)^{X_1 + X_2}$$
is a uniformly better unbiased estimator of $e^{-\theta}$.
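
The improvement can be seen numerically. The following sketch (with an arbitrary value of $\theta$ chosen for the demonstration) estimates the mean and variance of both $W$ and $(1/2)^{X_1 + X_2}$; both means are close to $e^{-\theta}$, while the Rao-Blackwellized estimator has the smaller variance.

```python
import numpy as np

rng = np.random.default_rng(3)
theta, reps = 0.8, 1_000_000                   # illustrative values, not from the notes

x1 = rng.poisson(theta, reps)
x2 = rng.poisson(theta, reps)
W = (x1 == 0).astype(float)                    # crude unbiased estimator of e^(-theta)
T_prime = 0.5 ** (x1 + x2)                     # Rao-Blackwellized estimator (1/2)^(X1 + X2)

print("target  e^(-theta) =", round(np.exp(-theta), 4))
print("W:        mean ~", round(W.mean(), 4), "  var ~", round(W.var(), 4))
print("T_prime:  mean ~", round(T_prime.mean(), 4), "  var ~", round(T_prime.var(), 4))
```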
