

Introduction to Bayesian Statistics

Van-Cuong DO
School of Applied Mathematics and Informatics
Hanoi University of Science and Technology

VIASM, 10-13 December 2020


Frequentist Approach vs. Bayesian Approach

Let P = {P_θ | θ ∈ Θ} be a parametric model and X = {x = (x_1, . . . , x_n) | n ∈ N} a sample space. We consider the problem of estimating the parameter θ given the data x.

Frequentist: the parameter is unknown but fixed.
Classical estimation methods: maximum likelihood estimation (MLE), method of moments, least squares, ...

Bayesian: the parameter is a random variable.
Prior choices: noninformative prior, conjugate prior, ...


Bayes’ Theorem

Denote by π(θ) the prior distribution of the parameter θ, f (x | θ) the probability of a realization x, and p(θ | x) the posterior distribution of θ given the data x. The continuous form of Bayes' Theorem states

    p(θ | x) = f (x | θ) × π(θ) / ∫_Θ f (x | θ) × π(θ) dθ.    (0.1.1)

That is, the posterior distribution is proportional to the product of the likelihood and the prior distribution:

    p(θ | x) ∝ f (x | θ) × π(θ).    (0.1.2)
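As a concrete illustration of this proportionality, the following is a minimal sketch (not from the slides; the data values and the flat prior are assumptions for illustration only) that evaluates an unnormalized posterior on a grid and normalizes it numerically.

```python
import numpy as np

# Hypothetical i.i.d. exponential data (rate lambda unknown)
x = np.array([0.8, 2.1, 0.4, 1.7, 0.9])

lam = np.linspace(0.01, 5, 500)                       # grid of candidate rates
dlam = lam[1] - lam[0]
prior = np.ones_like(lam)                             # flat prior, purely illustrative
likelihood = lam**len(x) * np.exp(-lam * x.sum())     # f(x | lambda)

unnormalized = likelihood * prior                     # numerator of Bayes' theorem
posterior = unnormalized / (unnormalized.sum() * dlam)  # normalize over the grid

print("posterior mean ≈", np.sum(lam * posterior) * dlam)
```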


Exponential distribution

Definition
A random variable X ∈ R₊ is said to follow the Exponential distribution with rate parameter λ > 0 if it has density of the form

    f_X(x) = λ e^(−λx).    (0.1.3)

We denote X ∼ Exp(λ).

Sometimes we use the parametrization θ = 1/λ and call θ the scale parameter. The density function is then

    f_X(x) = (1/θ) e^(−x/θ).    (0.1.4)


Probability density function and cumulative distribution function of the Exponential distribution

Figure: PDF of the Exponential distribution for different rate parameters λ.
Figure: CDF of the Exponential distribution for different rate parameters λ.
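The plots themselves are not reproduced here. A minimal sketch of how such curves can be generated, assuming NumPy, SciPy and matplotlib are available (SciPy parametrizes the Exponential by the scale 1/λ):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

x = np.linspace(0, 5, 400)
fig, (ax_pdf, ax_cdf) = plt.subplots(1, 2, figsize=(10, 4))

for lam in (0.5, 1.0, 2.0):                    # a few illustrative rates
    dist = stats.expon(scale=1 / lam)          # scale = 1 / lambda
    ax_pdf.plot(x, dist.pdf(x), label=f"λ = {lam}")
    ax_cdf.plot(x, dist.cdf(x), label=f"λ = {lam}")

ax_pdf.set_title("Exponential PDF"); ax_cdf.set_title("Exponential CDF")
ax_pdf.legend(); ax_cdf.legend()
plt.show()
```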


Likelihood

Let x = (x_1, . . . , x_n) be an i.i.d. realization from the Exponential distribution with rate parameter λ. The likelihood is the joint density of the n i.i.d. Exponential random variables:

    f (x | λ) = ∏_{i=1}^{n} f (x_i | λ) = λ^n e^(−λ Σ_{i=1}^{n} x_i).

Denoting s_n = Σ_{i=1}^{n} x_i, the likelihood of the Exponential distribution with rate parameter λ is

    f (x | λ) = λ^n e^(−s_n λ).    (0.1.5)


Maximum Likelihood Estimation


The log-likelihood function is

    l(λ) = log(L(λ)) = n log(λ) − s_n λ.    (0.1.6)

The first- and second-order derivatives of the log-likelihood are

    ∂l(λ)/∂λ = n/λ − s_n,
    ∂²l(λ)/∂λ² = −n/λ².

Denote by λ̂_MLE the maximum likelihood estimate and by I(λ) the Fisher information; then

    λ̂_MLE = n / s_n,    (0.1.7)
    I(λ) = −E[∂²l(λ)/∂λ²] = n / λ².    (0.1.8)

Properties of MLE

Λ̂_MLE is a biased estimator of λ.
Since Λ̂_MLE = n / S_n has an Inverse-gamma distribution with parameters (n, nλ), its expectation is

    E[Λ̂_MLE] = (n / (n − 1)) λ.

Λ̂_MLE is a consistent estimator of λ, since S_n / n converges almost surely to E(X) = 1/λ.
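A small Monte Carlo sketch of the bias factor n/(n − 1); the true rate, sample size and number of replications are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, n, reps = 0.5, 10, 100_000

samples = rng.exponential(scale=1 / lam, size=(reps, n))
mle = n / samples.sum(axis=1)                  # lambda_hat = n / s_n for each replication

print("mean of MLE            :", mle.mean())               # ≈ 0.5 * 10/9 ≈ 0.556
print("theory n/(n-1) * lambda:", n / (n - 1) * lam)
```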


Jeffreys’ rule

The Jeffreys prior for a vector parameter θ is proportional to the square root of the determinant of the Fisher information matrix:

    π(θ) ∝ √det I(θ).

Therefore, a noninformative prior for the likelihood of the Exponential distribution is

    π(λ) ∝ 1/λ.
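A symbolic sketch, assuming SymPy is available, that recovers the Fisher information n/λ² and hence the Jeffreys prior up to a constant:

```python
import sympy as sp

lam, n, s_n = sp.symbols("lambda n s_n", positive=True)

loglik = n * sp.log(lam) - s_n * lam      # log-likelihood l(lambda)
info = -sp.diff(loglik, lam, 2)           # -d^2 l / d lambda^2 = n / lambda^2
# The second derivative does not involve the data, so it equals its expectation.
jeffreys = sp.sqrt(info)                  # proportional to 1/lambda

print(sp.simplify(info))                  # n/lambda**2
print(sp.simplify(jeffreys))              # sqrt(n)/lambda
```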


Noninformative prior

We consider Bayesian inference for the likelihood of the Exponential distribution with a noninformative prior.

Theorem
Let x = (x_1, . . . , x_n) be an i.i.d. realization from the Exponential distribution. Denote by λ̃_NBE the noninformative Bayesian estimate of λ. Assuming a quadratic loss and the Jeffreys noninformative prior π(λ) ∝ 1/λ, we obtain the following results:
(i) the posterior distribution is Gamma(n, s_n);
(ii) the Bayesian estimate is λ̃_NBE = n / s_n.

Here we recover a classical result: the Bayesian estimate of λ equals its MLE when there is no prior information.
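A brief sketch of the resulting posterior summary, assuming SciPy and simulated data; note that the posterior mean reproduces the MLE n/s_n.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=50)        # true lambda = 0.5
n, s_n = len(x), x.sum()

posterior = stats.gamma(a=n, scale=1 / s_n)    # Gamma(n, s_n) in shape/rate form
print("posterior mean (NBE) :", posterior.mean())      # equals n / s_n
print("MLE                  :", n / s_n)
print("95% credible interval:", posterior.interval(0.95))
```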

Natural Conjugate Prior

Consider the likelihood of the Exponential distribution with rate parameter λ:

    f (x | λ) = λ^n e^(−s_n λ).

Mimicking the functional form of the likelihood, a natural conjugate prior family should be of the form

    π(λ) ∝ λ^(a−1) e^(−bλ).

This conjugate prior has the form of a familiar distribution: the Gamma distribution.


Gamma distribution

The Gamma distribution is a univariate distribution whose density function has a closed-form expression.

Definition
A random variable X ∈ R₊ is said to follow the Gamma distribution with parameters (α, β) if it has density of the form

    f_X(x) = K x^(α−1) e^(−βx),    (0.3.9)

where α, β > 0 and K = β^α / Γ(α).

Here α is called the shape parameter and β the rate parameter. We denote X ∼ Gamma(α, β). Sometimes we use the parametrization θ = 1/β and call θ the scale parameter.


Probability density function and cumulative distribution function of the Gamma distribution

Figure: PDF of the Gamma distribution for different shape parameters k and scale parameters θ.
Figure: CDF of the Gamma distribution for different shape parameters k and scale parameters θ.


Expectations and Variance


One can easily compute the expectation and the variance of X.

Theorem
Let X ∼ Gamma(α, β); then

    E(X) = α / β,    Var(X) = α / β².    (0.3.10)

From this theorem, one can deduce the two parameters of the Gamma distribution when the expectation and the variance are known:

    α = [E(X)]² / Var(X),    β = E(X) / Var(X).

Gamma distribution as natural conjugate prior for Bayesian analysis of the Exponential distribution

The following theorem shows that the Gamma distribution is a natural conjugate prior for the Exponential distribution.

Theorem
The natural conjugate prior for Bayesian analysis of the Exponential distribution with rate parameter λ is the Gamma distribution with parameters (a, b), and the posterior is a Gamma distribution with parameters (a + n, b + s_n).


Proof of the theorem

Recall that the likelihood is

    f (x | λ) = λ^n e^(−s_n λ).

If the random variable λ follows a Gamma distribution with parameters (a, b), then its density satisfies

    π(λ) ∝ λ^(a−1) e^(−bλ).

The posterior is proportional to the product of the likelihood and the prior:

    p(λ | x) ∝ f (x | λ) × π(λ) = λ^(a+n−1) e^(−(b+s_n)λ).

This shows that λ | x follows a Gamma distribution with parameters (a + n, b + s_n).

Bayesian Estimator

Assuming quadratic loss, the Bayesian estimator is the expectation of the posterior distribution. Denote by λ̃_CBE the Bayesian estimate of λ under the conjugate prior; then

    λ̃_CBE = (a + n) / (b + s_n).    (0.3.11)
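A minimal sketch of the conjugate update and the resulting estimate; the simulated data and hyperparameter values are assumptions, and SciPy is used only to build the posterior object.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=30)                   # true rate lambda = 0.5
n, s_n = len(x), x.sum()

a, b = 4.0, 8.0                                           # Gamma(a, b) prior hyperparameters
posterior = stats.gamma(a=a + n, scale=1 / (b + s_n))     # Gamma(a+n, b+s_n)

lambda_cbe = (a + n) / (b + s_n)                          # posterior mean = Bayes estimate
print("conjugate Bayes estimate:", lambda_cbe, "=", posterior.mean())
print("MLE for comparison      :", n / s_n)
```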


Convex combination

Recall that

    E(λ) = a / b,    λ̂_MLE = n / s_n.

We find that λ̃_CBE can be expressed as a convex combination of the MLE and the expectation of the prior distribution:

    λ̃_CBE = ξ λ̂_MLE + (1 − ξ) E(λ),    (0.3.12)

where ξ = s_n / (b + s_n).
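A short numerical check of this identity with arbitrary illustrative values:

```python
a, b, n, s_n = 4.0, 8.0, 30, 57.3            # arbitrary illustrative values
xi = s_n / (b + s_n)
combo = xi * (n / s_n) + (1 - xi) * (a / b)
print(combo, "==", (a + n) / (b + s_n))      # both evaluate to the same number
```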


Prior elicitation for gamma prior distribution


Prior elicitation is easily carried out from prior information about the parameter. Let Gamma(a, b) be the conjugate prior for the parameter λ. Let g_{λ,1} be a guess for the value of λ and g_{λ,2} a guess for the standard deviation associated with g_{λ,1}. Since the expectation and the variance of a Gamma distribution are available in closed form, one easily obtains values for a and b as follows.

The expectation and the variance of a Gamma-distributed random variable X are

    E(X) = a / b,    Var(X) = a / b².

Hence

    a / b = g_{λ,1},    a / b² = g_{λ,2}².

Therefore we get

    a = g_{λ,1}² / g_{λ,2}²,    b = g_{λ,1} / g_{λ,2}².    (0.3.13)
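A sketch of an elicitation helper implementing (0.3.13); the function name and example values are mine, not from the slides.

```python
def elicit_gamma(guess, guess_sd):
    """Gamma(a, b) hyperparameters from a prior guess for lambda and its standard deviation."""
    a = guess**2 / guess_sd**2
    b = guess / guess_sd**2
    return a, b

print(elicit_gamma(0.5, 0.15))   # guess lambda ≈ 0.5 with sd 0.15  ->  (a ≈ 11.1, b ≈ 22.2)
```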

Reparametrization

We reparametrize with θ = 1/λ; the Exponential distribution with scale parameter θ has density

    f_X(x) = (1/θ) e^(−x/θ).    (0.3.14)

The likelihood becomes

    f (x | θ) = θ^(−n) e^(−s_n/θ).    (0.3.15)

Mimicking the functional form of the likelihood, a natural conjugate prior family should be of the form

    π(θ) ∝ θ^(−a−1) e^(−b/θ).

This conjugate prior has the form of a familiar distribution: the Inverse-gamma distribution.

Inverse-gamma distribution

The Inverse-gamma distribution is a univariate distribution whose density function has a closed-form expression.

Definition
A random variable X ∈ R₊ is said to follow the Inverse-gamma distribution with parameters (α, β) if it has density of the form

    f_X(x) = K x^(−α−1) e^(−β/x),    (0.3.16)

where α, β > 0 and K = β^α / Γ(α).

We call α the shape parameter and β the scale parameter, and denote X ∼ IGamma(α, β).

If X has a Gamma distribution with parameters (α, β), then 1/X has an Inverse-gamma distribution with parameters (α, β).
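A Monte Carlo sketch of this reciprocal relationship, assuming SciPy (whose invgamma follows the shape/scale convention used above); the parameter values are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
alpha, beta = 3.0, 2.0

g = stats.gamma(a=alpha, scale=1 / beta).rvs(100_000, random_state=rng)  # Gamma(alpha, rate beta)
inv_samples = 1 / g                                                      # should follow IGamma(alpha, beta)

ig = stats.invgamma(a=alpha, scale=beta)
print("empirical mean:", inv_samples.mean())
print("IGamma mean   :", ig.mean())        # beta / (alpha - 1) = 1.0
```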

Probability density function and cumulative distribution function of the Inverse-gamma distribution

Figure: PDF of the Inverse-gamma distribution for different shape and scale parameters.
Figure: CDF of the Inverse-gamma distribution for different shape and scale parameters.


Expectations and Variance


One can easily compute the expectation and the variance of X.

Theorem
Let X ∼ IGamma(α, β); then

    E(X) = β / (α − 1)    (for α > 1),
    Var(X) = β² / ((α − 1)² (α − 2))    (for α > 2).    (0.3.17)

From this theorem, one can deduce the two parameters of the Inverse-gamma distribution when the expectation and the variance are known:

    α = 2 + [E(X)]² / Var(X),    β = E(X) (α − 1) = E(X) + [E(X)]³ / Var(X).

Inverse-gamma distribution as natural conjugate prior for Bayesian analysis of the Exponential distribution

The following theorem shows that the Inverse-gamma distribution is a natural conjugate prior for the Exponential distribution with scale parameter θ.

Theorem
The natural conjugate prior for Bayesian analysis of the Exponential distribution with scale parameter θ is the Inverse-gamma distribution with parameters (a, b), and the posterior is an Inverse-gamma distribution with parameters (a + n, b + s_n).


Proof of the theorem

Recall that the likelihood is

    f (x | θ) = θ^(−n) e^(−s_n/θ).

If the random variable θ follows an Inverse-gamma distribution with parameters (a, b), then its density satisfies

    π(θ) ∝ θ^(−a−1) e^(−b/θ).

The posterior is proportional to the product of the likelihood and the prior:

    p(θ | x) ∝ f (x | θ) × π(θ) = θ^(−a−n−1) e^(−(b+s_n)/θ).

This shows that θ | x follows an Inverse-gamma distribution with parameters (a + n, b + s_n).

Bayesian Estimator

Assuming quadratic loss, the Bayesian estimator is the expectation of the posterior distribution. Denote by θ̃_CBE the conjugate-prior Bayesian estimate for the Exponential distribution with scale parameter θ; then

    θ̃_CBE = (b + s_n) / (a + n − 1).    (0.3.18)


Convex combination

Recall that

    E(θ) = b / (a − 1),    θ̂_MLE = s_n / n.

We find that θ̃_CBE can be expressed as a convex combination of the MLE and the expectation of the prior distribution:

    θ̃_CBE = ξ θ̂_MLE + (1 − ξ) E(θ),    (0.3.19)

where ξ = n / (a + n − 1).


Prior elicitation for Inverse-gamma prior distribution


Let IGamma(a, b) be the conjugate prior for the parameter θ. Let g_{θ,1} be a guess for the value of θ and g_{θ,2} a guess for the standard deviation associated with g_{θ,1}.

The expectation and the variance of an Inverse-gamma distributed X are

    E(X) = b / (a − 1),    Var(X) = b² / ((a − 1)² (a − 2)).

One easily deduces values for a and b as follows:

    b / (a − 1) = g_{θ,1},    b² / ((a − 1)² (a − 2)) = g_{θ,2}².

Therefore we get

    a = 2 + g_{θ,1}² / g_{θ,2}²,    b = g_{θ,1} (a − 1) = g_{θ,1} + g_{θ,1}³ / g_{θ,2}².    (0.3.20)
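A sketch of the corresponding elicitation helper for (0.3.20); the function name and example values are mine.

```python
def elicit_invgamma(guess, guess_sd):
    """IGamma(a, b) hyperparameters from a prior guess for theta and its standard deviation."""
    a = 2 + guess**2 / guess_sd**2
    b = guess * (a - 1)
    return a, b

print(elicit_invgamma(2.0, 0.6))   # guess theta ≈ 2.0 with sd 0.6  ->  (a ≈ 13.1, b ≈ 24.2)
```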

Exponential distribution: small sample size

Table: Mean of the Bayesian estimates for a simulated Exponential distribution (input intensity λ = 0.5).

  n     Prior guess g₁    Prior sd g₂ = ρ g₁    Bayes estimate
  10    0.1               0.03                  0.1549
                          0.06                  0.2412
                          0.09                  0.2992
        0.5               0.15                  0.4451
                          0.30                  0.4154
                          0.45                  0.4060
        0.9               0.27                  0.5622
                          0.54                  0.4516
                          0.81                  0.4227
  MLE                                           0.3967
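The slides do not specify the full simulation design (number of replications, seeds), so the following sketch only illustrates the kind of experiment behind the table: elicit a Gamma prior from a guess, simulate Exponential samples, and average the conjugate Bayes estimates. Its output need not match the table values exactly.

```python
import numpy as np

def mean_bayes_estimate(n, guess, guess_sd, lam_true=0.5, reps=5000, seed=0):
    """Average conjugate Bayes estimate of lambda over simulated Exponential samples."""
    rng = np.random.default_rng(seed)
    a = guess**2 / guess_sd**2            # Gamma(a, b) prior via the elicitation formulas (0.3.13)
    b = guess / guess_sd**2
    x = rng.exponential(scale=1 / lam_true, size=(reps, n))
    s_n = x.sum(axis=1)
    return np.mean((a + n) / (b + s_n))   # posterior mean (a+n)/(b+s_n), averaged over replications

for g1, g2 in [(0.1, 0.03), (0.5, 0.15), (0.9, 0.27)]:
    print(f"n=10, guess={g1}, sd={g2}:", round(mean_bayes_estimate(10, g1, g2), 4))
```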

Exponential distribution: large sample size

Table: Mean of the Bayesian estimates for a simulated Exponential distribution (input intensity λ = 0.5).

  n      Prior guess g₁    Prior sd g₂ = ρ g₁    Bayes estimate
  1000   0.1               0.03                  0.4815
                           0.06                  0.4973
                           0.09                  0.5003
         0.5               0.15                  0.5022
                           0.30                  0.5027
                           0.45                  0.5028
         0.9               0.27                  0.5053
                           0.54                  0.5034
                           0.81                  0.5031
  MLE                                            0.5028

Noninformative Prior for the Likelihood of the Exponential Distribution

The noninformative prior for the likelihood of the Exponential distribution is π(λ) ∝ 1/λ.

The noninformative prior for the likelihood of the Normal distribution is π(µ) ∝ 1.

The noninformative prior for the likelihood of the homogeneous Poisson process is π(λ) ∝ 1/λ.

Bayesian estimate with noninformative prior (NBE) = maximum likelihood estimate (MLE).

Conjugate Priors

The conjugate prior for the likelihood of the Exponential distribution is the Gamma distribution, or the Inverse-gamma distribution under reparametrization.

The conjugate prior for the likelihood of the Normal distribution (σ known) is the Normal distribution.

The conjugate prior for the likelihood of the homogeneous Poisson process is the Gamma distribution, or the Inverse-gamma distribution under reparametrization.

Conjugate Bayesian estimate = a convex combination of the maximum likelihood estimate and the prior expectation.


