
PD Dr. Lorenz A. Gilch
Chair of Stochastics
and its Applications

Exercises "Mathematical Statistics"

Summer semester 2022
Exercise sheet 2
19.5.2022

1. Neyman criterion for sufficiency:


Let Pθ = LN(µ, σ²) with parameter θ = (µ, σ²) ∈ R × (0, ∞), that is, Pθ is the
log-normal distribution on (R, B) with Lebesgue density

$$ f_\theta^{(1)}(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, x^{-1} \exp\!\left( -\frac{(\ln(x)-\mu)^2}{2\sigma^2} \right) \mathbf{1}_{(0,\infty)}(x), \qquad x \in \mathbb{R}. $$

Now fix any n ∈ N. Let X1, . . . , Xn be independent and identically distributed random
variables with X1 ∼ Pθ. Then X := (X1, . . . , Xn) is a random vector with image X = Rⁿ.
Show that

$$ T : \mathcal{X} \to \mathbb{R}^2, \qquad (x_1, \dots, x_n) \mapsto \left( \sum_{i=1}^n \ln(x_i),\; \sum_{i=1}^n \ln(x_i)^2 \right) $$

is a sufficient statistic for θ in the statistical model (X, P), where

$$ \mathcal{P} = \left\{ f_\theta : \mathbb{R}^n \to [0,\infty) \;\middle|\; \forall x_1, \dots, x_n \in \mathbb{R} :\ f_\theta(x_1, \dots, x_n) = \prod_{i=1}^n f_\theta^{(1)}(x_i) \right\}. $$
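A minimal numerical sketch of the factorisation behind the Neyman criterion, assuming NumPy: the joint log-likelihood of a log-normal sample can be computed either directly from the data or from T(x) together with the θ-free factor h(x) = ∏ xᵢ⁻¹, and both computations agree for every θ.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.0, sigma=0.5, size=20)     # sample from LN(1, 0.25)

t1, t2 = np.sum(np.log(x)), np.sum(np.log(x) ** 2)  # the statistic T(x)
log_h = -np.sum(np.log(x))                          # theta-free factor: log of prod x_i^{-1}
n = len(x)

def loglik_direct(mu, sigma2):
    """Joint log-likelihood computed from the raw sample."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma2) - np.log(x)
                  - (np.log(x) - mu) ** 2 / (2 * sigma2))

def loglik_from_T(mu, sigma2):
    """The same quantity computed from T(x) = (t1, t2) and log_h only."""
    return (-0.5 * n * np.log(2 * np.pi * sigma2) + log_h
            - (t2 - 2 * mu * t1 + n * mu ** 2) / (2 * sigma2))

for mu, s2 in [(0.0, 1.0), (1.0, 0.25), (-2.0, 3.0)]:
    assert np.isclose(loglik_direct(mu, s2), loglik_from_T(mu, s2))
```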

2. Sufficiency:
Prove Exercise 2.9 from the lecture:
Let X be a random variable with (discrete or continuous) density fθ depending on some
parameter θ. Furthermore, let T : X → T be a sufficient statistic for θ and let b : T → T be
a bijective, measurable mapping which does not depend on the parameter θ. We assume
also that b⁻¹ is measurable.
Then b ◦ T is also a sufficient statistic for θ.

3. Method of moments:
Consider the Lebesgue density
$$ f_\theta(x) = \mathbf{1}_{[0,\theta]}(x) \cdot \frac{2x}{\theta^2}, \qquad x \in \mathbb{R}. $$

Let X1, . . . , Xn be i.i.d. random variables with density fθ. Determine a point estimator θ̂
for θ via the method of moments.
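A minimal sketch of the method-of-moments recipe for this density, assuming NumPy and SciPy; the closed form of E_θ[X1] is part of the exercise, so the sketch matches the empirical mean to the theoretical first moment purely numerically.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def first_moment(theta):
    """E_theta[X1] = integral of x * f_theta(x) over [0, theta]."""
    val, _ = quad(lambda x: x * 2 * x / theta**2, 0, theta)
    return val

def mom_estimate(sample):
    """Solve E_theta[X1] = sample mean for theta (method of moments)."""
    xbar = np.mean(sample)
    # first_moment is increasing in theta, so this wide bracket contains a root.
    return brentq(lambda th: first_moment(th) - xbar, 1e-9, 1e6)

rng = np.random.default_rng(1)
theta_true = 2.0
# Inverse-CDF sampling: F(x) = x^2 / theta^2 on [0, theta], so F^{-1}(u) = theta * sqrt(u).
sample = theta_true * np.sqrt(rng.uniform(size=1000))
print(mom_estimate(sample))  # should be close to theta_true = 2.0
```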

Please turn −→
4. Maximum-likelihood estimator:

a) Let X1, . . . , Xn ∼ Pa(1, θ) be i.i.d. random variables, where Pa(1, θ) is the Pareto
   distribution with parameters 1 and θ ∈ (0, ∞). Determine the ML estimator for θ.
   Recall:
   The Lebesgue density of Pa(c, θ) is given by $f_{c,\theta}(x) = \theta c^{\theta} x^{-(\theta+1)} \mathbf{1}_{(c,\infty)}(x)$ with
   parameters c, θ > 0.
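A minimal numerical companion to part a), assuming NumPy and SciPy: it simulates a Pa(1, θ) sample via inverse-CDF sampling and maximises the log-likelihood with a scalar optimiser; the closed-form ML estimator is what the exercise asks for.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
theta_true = 3.0
# Inverse-CDF sampling for Pa(1, theta): F(x) = 1 - x^{-theta} on (1, inf),
# so F^{-1}(u) = (1 - u)^{-1/theta}.
x = (1 - rng.uniform(size=5000)) ** (-1 / theta_true)

def neg_loglik(theta):
    """Negative log-likelihood: -(n log theta - (theta + 1) sum log x_i)."""
    return -(len(x) * np.log(theta) - (theta + 1) * np.sum(np.log(x)))

res = minimize_scalar(neg_loglik, bounds=(1e-6, 50), method="bounded")
print(res.x)  # numerical ML estimate; should be close to theta_true = 3.0
```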

b) Let X1, X2, . . . , Xn be i.i.d. real-valued random variables. Set α := exp(−exp(exp(100))).
   Assume that the distribution of X1 has the Lebesgue density

   $$ f_\theta^{(1)}(x) = \frac{1-\alpha}{\sqrt{2\pi}} \exp\!\left( -\frac{1}{2}(x-a)^2 \right) + \frac{\alpha}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{1}{2\sigma^2}(x-a)^2 \right), $$

where θ := (a, σ²) ∈ R × (0, ∞) =: Θ. By the definition of α, Xi is approximately
N(a, 1)-distributed. Show that

$$ \forall (x_1, \dots, x_n) \in \mathbb{R}^n : \quad \sup_{\theta \in \Theta} \prod_{i=1}^n f_\theta^{(1)}(x_i) = \infty, $$

which implies that the ML estimator for θ does not exist in the model (Rⁿ, P), where

$$ \mathcal{P} = \left\{ f_\theta : \mathbb{R}^n \to [0,\infty) \;\middle|\; \forall x_1, \dots, x_n \in \mathbb{R} :\ f_\theta(x_1, \dots, x_n) = \prod_{i=1}^n f_\theta^{(1)}(x_i) \right\}. $$

Hint: Choose a = x1 and let σ² ↘ 0.
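A minimal numerical sketch of the hint, assuming NumPy. The true α = exp(−exp(exp(100))) underflows to 0 in floating point, so the sketch substitutes a merely small stand-in value; the qualitative blow-up of the likelihood as σ² ↘ 0 with a = x1 is the same.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=10)   # any real-valued sample will do
alpha = 1e-10             # stand-in: the true alpha underflows to 0 in float64

def log_density(xi, a, sigma2):
    """log f_theta^{(1)}(xi), evaluated stably in log space."""
    log_p1 = np.log1p(-alpha) - 0.5 * np.log(2 * np.pi) - 0.5 * (xi - a) ** 2
    log_p2 = (np.log(alpha) - 0.5 * np.log(2 * np.pi * sigma2)
              - (xi - a) ** 2 / (2 * sigma2))
    return np.logaddexp(log_p1, log_p2)

a = x[0]  # hint: centre the narrow mixture component on the first observation
for sigma2 in [1e-2, 1e-10, 1e-100, 1e-300]:
    loglik = sum(log_density(xi, a, sigma2) for xi in x)
    print(sigma2, loglik)  # the log-likelihood grows without bound as sigma2 -> 0
```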

5. Factorisation of ML estimator:
Let X : Ω → X be a random variable with (discrete/continuous) density fθ and let
T : X → T be a sufficient statistic for the parameter θ ∈ Θ. Show that the maximum-
likelihood estimator θ̂_ML factorises over T, that is, there exists a function g such that
θ̂_ML = g ◦ T.
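A minimal concrete instance of this factorisation, assuming the Gaussian model N(µ, σ²) (chosen here for illustration, not prescribed by the exercise): T(x) = (Σ xᵢ, Σ xᵢ², n) is sufficient, and the ML estimator can be written as g ∘ T.

```python
import numpy as np

def T(x):
    """Sufficient statistic for the N(mu, sigma^2) model (n included for convenience)."""
    return np.sum(x), np.sum(x ** 2), len(x)

def g(t):
    """ML estimator expressed as a function of T(x) only."""
    s1, s2, n = t
    mu_hat = s1 / n
    sigma2_hat = s2 / n - mu_hat ** 2   # equals mean((x - mu_hat)^2)
    return mu_hat, sigma2_hat

x = np.random.default_rng(4).normal(loc=2.0, scale=1.5, size=500)
mu_hat, s2_hat = g(T(x))
# Same values as the familiar direct maximisers of the Gaussian likelihood:
assert np.isclose(mu_hat, np.mean(x)) and np.isclose(s2_hat, np.var(x))
```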
