08.02. How to Generate an Estimator

The document discusses the process of generating estimators for statistical inference, focusing on the principle of maximum likelihood estimation (MLE). It explains how to calculate the MLE for both continuous and discrete data, with a worked example of each case, and emphasizes choosing the parameter value that maximizes the likelihood of the observed sample.

Pre-foundation Statistics

Abbas Alhakim

Statistical Inference: Point Estimation


Part II
How to Generate an Estimator?
How to Generate an Estimator

Key Idea: In some cases it might be difficult to guess any reasonable estimator of
a parameter $\theta$.
Suppose a numerical random sample $X_1, \dots, X_n$ is to be collected from a
population with common pdf $f(x; \theta)$.
Suppose, further, that a specific sample is collected and gives the values $k_1, \dots, k_n$.
The joint density of $X_1, \dots, X_n$ is $f(x_1, \dots, x_n; \theta) = \prod_{i=1}^{n} f(x_i; \theta)$.

Definition: the likelihood function of the sample $k_1, \dots, k_n$ is given as
$$L(\theta) = L(\theta; k_1, \dots, k_n) = \prod_{i=1}^{n} f(k_i; \theta)$$
Caution: $L(\theta)$ is a function of $\theta$; the observed values $k_1, \dots, k_n$ are held fixed.
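To make the definition concrete, here is a minimal Python sketch (an addition, not from the slides) that evaluates $L(\theta)$ as the product of pdf values at the observed sample. The exponential density standing in for $f(x; \theta)$ and the sample values are illustrative assumptions.

```python
import math

def exp_pdf(x, theta):
    # Assumed illustrative density: f(x; theta) = (1/theta) e^(-x/theta), x > 0.
    return (1.0 / theta) * math.exp(-x / theta)

def likelihood(theta, sample, pdf=exp_pdf):
    # L(theta) = product over i of f(k_i; theta), per the definition above.
    L = 1.0
    for k in sample:
        L *= pdf(k, theta)
    return L

sample = [1.2, 0.7, 2.5]        # hypothetical observed values k_1, ..., k_n
print(likelihood(1.0, sample))  # likelihood at theta = 1.0
print(likelihood(1.5, sample))  # likelihood at theta = 1.5
```

In practice one works with $\ln L(\theta)$, as the slides do below, since a product of many small density values quickly underflows floating-point arithmetic.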

The Principle of Maximum Likelihood

Given a data set $k_1, \dots, k_n$, it is plausible to choose as the estimate of $\theta$ the value
of the parameter that maximizes the likelihood of the observed sample.
Supporting reasons:
- Since the sample $k_1, \dots, k_n$ actually occurred, it is reasonable to regard it as having had a comparatively high likelihood of occurring.
- As the sample size $n$ becomes larger, the MLE converges to the true value of $\theta$.
(True under fairly general conditions.)
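The convergence claim can be checked empirically. Below is a minimal simulation sketch (an addition, assuming NumPy and an assumed Poisson(λ = 4) population; the Poisson MLE $\hat{\lambda} = \bar{x}$ is derived later in this document). The estimate settles near the true parameter as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
true_lambda = 4.0  # assumed true parameter for the simulation

# For Poisson data the MLE of lambda is the sample mean (derived below).
for n in (10, 100, 1_000, 10_000):
    sample = rng.poisson(true_lambda, size=n)
    print(f"n = {n:6d}   MLE = {sample.mean():.3f}")
```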

Calculating the MLE: Continuous Data

Example: A researcher has reason to believe that the variability in a certain type of
measurement is well explained by the continuous model
$$f(x; \theta) = \frac{1}{\theta^2}\, x\, e^{-x/\theta} \quad \text{for } 0 < x < \infty \text{ and } 0 < \theta < \infty$$
The following measurements were collected: 5.6, 12.1, 18.4, 9.2, 10.7.

Likelihood function:
$$L(\theta) = \prod_{i=1}^{n} \frac{1}{\theta^2}\, x_i\, e^{-x_i/\theta} = \theta^{-2n} \left(\prod_{i=1}^{n} x_i\right) e^{-(1/\theta)\sum_{i=1}^{n} x_i}$$
(We use the fact: $e^{a_1} \times \cdots \times e^{a_n} = e^{a_1 + \cdots + a_n}$.)

Log-likelihood function:
$$\ln L(\theta) = -2n \ln \theta + \ln \prod_{i=1}^{n} x_i - \frac{1}{\theta} \sum_{i=1}^{n} x_i$$
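As a numerical companion to the algebra (an addition), the sketch below evaluates $\ln L(\theta)$ from the formula above at a few candidate values of $\theta$ for the five measurements; the values peak near the estimate derived on the next slide.

```python
import math

data = [5.6, 12.1, 18.4, 9.2, 10.7]
n = len(data)
sum_x = sum(data)                            # sum of the x_i (= 56.0)
log_prod_x = sum(math.log(x) for x in data)  # ln of the product of the x_i

def log_likelihood(theta):
    # ln L(theta) = -2n ln(theta) + ln(prod x_i) - (1/theta) sum x_i
    return -2 * n * math.log(theta) + log_prod_x - sum_x / theta

for theta in (3.0, 5.0, 5.6, 7.0, 10.0):
    print(f"theta = {theta:5.1f}   ln L(theta) = {log_likelihood(theta):9.3f}")
```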

Calculating the MLE: Continuous Data

Log-likelihood function:
$$\ln L(\theta) = -2n \ln \theta + \ln \prod_{i=1}^{n} x_i - \frac{1}{\theta} \sum_{i=1}^{n} x_i$$
Key Idea: Maximizing $\ln L(\theta)$ is equivalent to maximizing $L(\theta)$, because $\ln$ is a strictly increasing function.
Take the derivative of $\ln L(\theta)$, set it to zero, and solve for $\theta$:
$$\frac{d \ln L(\theta)}{d\theta} = \frac{-2n}{\theta} + \frac{1}{\theta^2} \sum_{i=1}^{n} x_i = 0$$
The MLE is
$$\hat{\theta} = \frac{1}{2n} \sum_{i=1}^{n} x_i$$
Using the 5 measurements (sum = 56.0), the estimate is $\hat{\theta} = \frac{1}{2(5)} \times 56.0 = 5.6$.
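A quick sanity check (an addition, assuming SciPy is available): maximize $\ln L(\theta)$ numerically and compare against the closed form.

```python
import math
from scipy.optimize import minimize_scalar

data = [5.6, 12.1, 18.4, 9.2, 10.7]
n, sum_x = len(data), sum(data)

def neg_log_lik(theta):
    # Negative log-likelihood; the ln(prod x_i) term is constant in theta,
    # so dropping it does not change where the maximum occurs.
    return 2 * n * math.log(theta) + sum_x / theta

res = minimize_scalar(neg_log_lik, bounds=(0.01, 100.0), method="bounded")
print("numerical MLE:", round(res.x, 4))   # approximately 5.6
print("closed form  :", sum_x / (2 * n))   # 5.6
```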

Calculating the MLE: Discrete Data

Suppose that the numbers of printing jobs that arrived at a network printer over the
past 10 days were: 7, 3, 6, 7, 8, 12, 5, 6, 9, 3.
Suppose also that this number follows a Poisson distribution with parameter $\lambda$.
Use the data to find the MLE of $\lambda$.

Calculating the MLE: Discrete Data

Recall that the Poisson pmf is
$$p(x; \lambda) = e^{-\lambda} \frac{\lambda^x}{x!}; \quad x = 0, 1, 2, 3, \dots$$
Likelihood function:
$$L(\lambda) = \prod_{i=1}^{n} e^{-\lambda} \frac{\lambda^{x_i}}{x_i!} = e^{-n\lambda} \frac{\lambda^{\sum_{i=1}^{n} x_i}}{\prod_{i=1}^{n} x_i!}$$
Log-likelihood function:
$$\ell(\lambda) = \log L(\lambda) = -n\lambda + \left(\sum_{i=1}^{n} x_i\right) \log \lambda - \log \prod_{i=1}^{n} x_i!$$
Derivative:
$$\ell'(\lambda) = -n + \frac{1}{\lambda} \sum_{i=1}^{n} x_i = 0$$
MLE:
$$\hat{\lambda} = \frac{\sum_{i=1}^{n} x_i}{n} = \bar{x}$$
For the printer data, $\sum x_i = 7 + 3 + 6 + 7 + 8 + 12 + 5 + 6 + 9 + 3 = 66$ and $n = 10$, so $\hat{\lambda} = 66/10 = 6.6$.
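Applying the result to the printer data is a one-liner; the sketch below (an addition) also evaluates $\ell(\lambda)$ near the sample mean to confirm it is the maximizer. Here `math.lgamma(x + 1)` computes $\log(x!)$.

```python
import math

jobs = [7, 3, 6, 7, 8, 12, 5, 6, 9, 3]
n = len(jobs)

mle = sum(jobs) / n           # hat(lambda) = sample mean
print("MLE of lambda:", mle)  # 6.6

def log_lik(lam):
    # l(lambda) = -n*lambda + (sum x_i) log(lambda) - sum log(x_i!)
    return (-n * lam + sum(jobs) * math.log(lam)
            - sum(math.lgamma(x + 1) for x in jobs))

# The log-likelihood is highest at the sample mean.
for lam in (5.6, 6.6, 7.6):
    print(f"lambda = {lam:.1f}   l(lambda) = {log_lik(lam):.3f}")
```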
