EXP Exercise NestedSampling-1
with L(θ) the likelihood function and π(θ) the prior probability density. (See the end of
this sheet for a stepwise overview of the algorithm.) However, the technique can in principle
be used to compute any (multi-dimensional) integral.
F(x, y) = 1/(2π σx σy) · exp( −x²/(2σx²) − y²/(2σy²) ). (2)
To begin with, pick σx = 0.1 and σy = 0.5. Suppose we want to calculate the integral
∫_{−0.5}^{0.5} dx ∫_{−0.5}^{0.5} dy F(x, y). (3)
If you wanted to compute this by means of nested sampling, what quantities can naturally
be identified with evidence, likelihood function, and prior?
Write a nested sampling routine to calculate (3). In doing so,
• Use print statements to monitor how the smallest likelihood for the set of live points
evolves.
You will need to choose a termination condition. Based on what you are seeing, which
condition appears well suited? Also experiment with increasing the number of live points,
to see how many are needed before the end result stops changing appreciably.
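One possible shape for such a routine is sketched below (all names are my own choices, not prescribed by the exercise). The flat prior on the square plays the role of π, the integrand F plays the role of the likelihood L, and the integral (3) plays the role of the evidence Z. New points satisfying the likelihood constraint are found by simple rejection from the prior, which is adequate for this small problem:

```python
import math
import random

def F(x, y, sx=0.1, sy=0.5):
    """Integrand of Eq. (2), playing the role of the likelihood."""
    return math.exp(-x**2 / (2*sx**2) - y**2 / (2*sy**2)) / (2*math.pi*sx*sy)

def nested_sampling(M=100, tol=1e-2, seed=0):
    """Estimate the integral (3) using M live points and a flat prior on the square."""
    rng = random.Random(seed)
    draw = lambda: (rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5))
    live = [draw() for _ in range(M)]
    L = [F(x, y) for (x, y) in live]
    Z, X_prev, k = 0.0, 1.0, 0
    while True:
        k += 1
        i = min(range(M), key=lambda j: L[j])   # lowest-likelihood live point
        Lmin = L[i]
        if k % 100 == 0:                        # monitor the smallest likelihood
            print(f"step {k}: smallest likelihood = {Lmin:.4g}")
        t = rng.random() ** (1.0 / M)           # t ~ p(t) = M t^(M-1)
        X = t * X_prev
        Z += Lmin * (X_prev - X)                # accumulate L_k * dX_k
        X_prev = X
        # terminate once the remaining prior mass can no longer change Z much
        if max(L) * X < tol * Z:
            break
        while True:                             # replace the discarded point by a
            x, y = draw()                       # fresh prior draw with L > Lmin
            if F(x, y) > Lmin:
                break
        live[i], L[i] = (x, y), F(x, y)
    return Z + X * sum(L) / M                   # contribution of the final live points

print(nested_sampling())   # compare with the cross-check of part 2
```

Changing `sx`, `sy` in `F` also lets you explore part 3 below.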
2. To check whether your calculation of the integral (3) is correct, compute it in some
other way, e.g. using Mathematica or Matlab.
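A quick cross-check is also possible without Mathematica or Matlab: F separates in x and y, so the integral (3) is a product of two one-dimensional Gaussian integrals expressible through the error function (a sketch; the function name is my own):

```python
from math import erf, sqrt

def box_integral(sx=0.1, sy=0.5, a=0.5):
    """Closed form of (3): each 1-D factor is
    ∫_{-a}^{a} exp(-u²/(2σ²)) / (√(2π) σ) du = erf(a / (σ √2))."""
    return erf(a / (sx * sqrt(2.0))) * erf(a / (sy * sqrt(2.0)))

print(box_integral())   # ≈ 0.6827 for σx = 0.1, σy = 0.5
```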
3. Pick different values for σx, σy from the ones above. How does the nested sampling
result change when the integrand F(x, y) is more or less strongly peaked in one or both
of the variables x, y?
4. The nested sampling process also gives you an approximation of the posterior density
function; in our case, p(xk , yk ) = L(xk , yk )∆Xk /Z, where (xk , yk ) is the live point discarded
in step k of the algorithm, L is the likelihood, ∆Xk the difference in prior mass between
steps k and k − 1, and Z the evidence. Plot p(xk , yk ) as a 2D histogram. Had you expected
the result?
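A minimal way to build the weighted 2-D histogram, assuming you have collected the discarded points and their weights from your run. Purely to keep the snippet self-contained, stand-in samples are drawn here from the flat prior and weighted by the likelihood, which yields the same posterior shape:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for the discarded points (x_k, y_k); in practice these come
# from your nested-sampling run, with weights w_k = L(x_k, y_k) ΔX_k / Z.
xs = rng.uniform(-0.5, 0.5, 20000)
ys = rng.uniform(-0.5, 0.5, 20000)
ws = np.exp(-xs**2 / (2 * 0.1**2) - ys**2 / (2 * 0.5**2))
ws /= ws.sum()                        # normalize the weights

H, xedges, yedges = np.histogram2d(xs, ys, bins=40, weights=ws)
# e.g. plt.imshow(H.T, origin="lower", extent=[-0.5, 0.5, -0.5, 0.5])
```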
Verify that this prior is normalized. Find a way to randomly draw points from the above
distribution.
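Since Eq. (4) is not reproduced in this excerpt, here is a generic rejection-sampling sketch that draws from an arbitrary (possibly unnormalized) one-dimensional density on an interval; the prior of Eq. (4) would be plugged in as `pdf` (all names are hypothetical):

```python
import random

def rejection_sample(pdf, lo, hi, pmax, rng):
    """Draw one point from density `pdf` on [lo, hi], where pmax >= max pdf.
    A uniform candidate x is accepted with probability pdf(x) / pmax."""
    while True:
        x = rng.uniform(lo, hi)
        if rng.uniform(0.0, pmax) < pdf(x):
            return x

# Sanity check on a known density p(x) = 2x on [0, 1], which has mean 2/3:
rng = random.Random(0)
samples = [rejection_sample(lambda x: 2.0 * x, 0.0, 1.0, 2.0, rng)
           for _ in range(20000)]
print(sum(samples) / len(samples))   # close to 2/3
```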
6. Implement the non-flat prior of Eq. (4) in your nested sampling code.
1. Initialization: Pick live points θ1 , . . . , θM randomly from the prior density distribution
π(θ).
2. Step k ≥ 0:
• If k = 0 then associate with the step a prior mass X0 = 1. If k > 0 then associate with
it a prior mass Xk = tk Xk−1, where Xk−1 is the prior mass from step k − 1, and
tk is drawn randomly from the distribution p(t) = M t^(M−1).
• Remove from the set of live points the point θk with the lowest likelihood, Lk = L(θk).
• Pick a new point θ′ randomly from the prior density distribution. Keep doing
this until L(θ′) > Lk; then add that point to the list of live points.
3. Termination: stop after a chosen number of steps N and estimate the evidence as

Z ≈ Σ_{k=1}^{N} Lk ∆Xk, (5)

with ∆Xk = Xk−1 − Xk, and obtain samples of the
posterior density distribution through
p(θk) = Lk ∆Xk / Z. (6)
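The shrinkage in step 2 can be checked in isolation: p(t) = M t^(M−1) has CDF t^M, so inverse-transform sampling gives t = u^(1/M) for u uniform on (0, 1). A sketch of the prior-mass bookkeeping (names are my own):

```python
import random

def prior_masses(N, M, seed=1):
    """Return X_0 = 1, X_1, ..., X_N with X_k = t_k X_{k-1},
    where t_k = u^(1/M) ~ p(t) = M t^(M-1) (inverse-transform sampling)."""
    rng = random.Random(seed)
    X = [1.0]
    for _ in range(N):
        X.append(X[-1] * rng.random() ** (1.0 / M))
    return X

X = prior_masses(N=500, M=100)
dX = [X[k - 1] - X[k] for k in range(1, len(X))]   # ∆X_k = X_{k-1} - X_k
# log X_k shrinks by 1/M per step on average, so X_500 ≈ exp(-5) here.
# The posterior weights of Eq. (6) would then be L_k * dX[k-1] / Z.
```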